00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1745 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3006 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.038 The recommended git tool is: git 00:00:00.038 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.064 Fetching changes from the remote Git repository 00:00:00.066 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.100 Using shallow fetch with depth 1 00:00:00.100 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.100 > git --version # timeout=10 00:00:00.161 > git --version # 'git version 2.39.2' 00:00:00.162 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.162 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.162 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:50.181 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:50.195 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:50.208 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:00:50.208 > git config core.sparsecheckout # timeout=10 00:00:50.223 > git read-tree -mu HEAD # timeout=10 00:00:50.239 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=5 00:00:50.258 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:00:50.258 > git rev-list --no-walk 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10 00:00:50.361 [Pipeline] Start of Pipeline 00:00:50.377 [Pipeline] library 00:00:50.378 Loading library shm_lib@master 00:00:50.379 Library shm_lib@master is cached. Copying from home. 00:00:50.400 [Pipeline] node 00:00:50.416 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:50.418 [Pipeline] { 00:00:50.432 [Pipeline] catchError 00:00:50.434 [Pipeline] { 00:00:50.447 [Pipeline] wrap 00:00:50.457 [Pipeline] { 00:00:50.465 [Pipeline] stage 00:00:50.466 [Pipeline] { (Prologue) 00:00:50.623 [Pipeline] sh 00:00:50.905 + logger -p user.info -t JENKINS-CI 00:00:50.925 [Pipeline] echo 00:00:50.927 Node: WFP39 00:00:50.936 [Pipeline] sh 00:00:51.238 [Pipeline] setCustomBuildProperty 00:00:51.251 [Pipeline] echo 00:00:51.253 Cleanup processes 00:00:51.259 [Pipeline] sh 00:00:51.541 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:51.541 1054039 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:51.554 [Pipeline] sh 00:00:51.831 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:51.831 ++ grep -v 'sudo pgrep' 00:00:51.831 ++ awk '{print $1}' 00:00:51.831 + sudo kill -9 00:00:51.831 + true 00:00:51.846 [Pipeline] cleanWs 00:00:51.856 [WS-CLEANUP] Deleting project workspace... 00:00:51.856 [WS-CLEANUP] Deferred wipeout is used... 
00:00:51.862 [WS-CLEANUP] done 00:00:51.866 [Pipeline] setCustomBuildProperty 00:00:51.880 [Pipeline] sh 00:00:52.160 + sudo git config --global --replace-all safe.directory '*' 00:00:52.228 [Pipeline] nodesByLabel 00:00:52.229 Could not find any nodes with 'sorcerer' label 00:00:52.232 [Pipeline] retry 00:00:52.234 [Pipeline] { 00:00:52.253 [Pipeline] checkout 00:00:52.259 The recommended git tool is: git 00:00:52.267 using credential 00000000-0000-0000-0000-000000000002 00:00:52.271 Cloning the remote Git repository 00:00:52.273 Honoring refspec on initial clone 00:00:52.274 Cloning repository https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:52.275 > git init /var/jenkins/workspace/short-fuzz-phy-autotest/jbp # timeout=10 00:00:52.283 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:52.283 > git --version # timeout=10 00:00:52.288 > git --version # 'git version 2.43.0' 00:00:52.288 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:52.288 Setting http proxy: proxy-dmz.intel.com:911 00:00:52.288 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=10 00:03:13.669 Avoid second fetch 00:03:13.688 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:03:13.777 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:03:13.784 [Pipeline] } 00:03:13.804 [Pipeline] // retry 00:03:13.817 [Pipeline] nodesByLabel 00:03:13.819 Could not find any nodes with 'sorcerer' label 00:03:13.824 [Pipeline] retry 00:03:13.826 [Pipeline] { 00:03:13.849 [Pipeline] checkout 00:03:13.856 The recommended git tool is: NONE 00:03:13.866 using credential 00000000-0000-0000-0000-000000000002 00:03:13.872 Cloning the remote Git repository 00:03:13.875 Honoring refspec on initial clone 00:03:13.651 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:03:13.656 > git config --add remote.origin.fetch refs/heads/master # timeout=10 00:03:13.670 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:03:13.679 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:03:13.688 > git config core.sparsecheckout # timeout=10 00:03:13.693 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10 00:03:13.876 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:03:13.877 > git init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk # timeout=10 00:03:13.885 Using reference repository: /var/ci_repos/spdk_multi 00:03:13.885 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:03:13.885 > git --version # timeout=10 00:03:13.890 > git --version # 'git version 2.43.0' 00:03:13.890 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:03:13.890 Setting http proxy: proxy-dmz.intel.com:911 00:03:13.891 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/heads/v24.01.x +refs/heads/master:refs/remotes/origin/master # timeout=10 00:03:54.797 Avoid second fetch 00:03:54.816 Checking out Revision 36faa8c312bf9059b86e0f503d7fd6b43c1498e6 (FETCH_HEAD) 00:03:55.026 Commit message: "bdev/nvme: Fix the case that namespace was removed during reset" 00:03:55.055 First time build. Skipping changelog. 
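The spdk checkout above borrows objects from a local reference repository (/var/ci_repos/spdk_multi) so only the missing refs are fetched through the proxy before the exact commit is pinned. A rough manual equivalent, using the paths and refs from the log (the Jenkins git plugin actually drives the lower-level git init/fetch/checkout commands shown above, so this is only a sketch):

    git clone --reference /var/ci_repos/spdk_multi \
        https://review.spdk.io/gerrit/a/spdk/spdk spdk
    cd spdk
    # fetch the LTS branch plus the master refspec seen in the log, then pin the commit
    git fetch --tags --force origin refs/heads/v24.01.x \
        +refs/heads/master:refs/remotes/origin/master
    git checkout -f 36faa8c312bf9059b86e0f503d7fd6b43c1498e6
    # submodules also reuse objects from the reference repo, as in the log
    git submodule update --init --recursive --reference /var/ci_repos/spdk_multi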
00:03:54.775 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:03:54.781 > git config --add remote.origin.fetch refs/heads/v24.01.x # timeout=10 00:03:54.787 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:03:54.798 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:03:54.807 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:03:54.817 > git config core.sparsecheckout # timeout=10 00:03:54.822 > git checkout -f 36faa8c312bf9059b86e0f503d7fd6b43c1498e6 # timeout=10 00:03:55.027 > git rev-list --no-walk 27395820e570bad3910444111c4d7d52b3ea17ad # timeout=10 00:03:55.060 > git remote # timeout=10 00:03:55.065 > git submodule init # timeout=10 00:03:55.146 > git submodule sync # timeout=10 00:03:55.226 > git config --get remote.origin.url # timeout=10 00:03:55.234 > git submodule init # timeout=10 00:03:55.322 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:03:55.328 > git config --get submodule.dpdk.url # timeout=10 00:03:55.333 > git remote # timeout=10 00:03:55.336 > git config --get remote.origin.url # timeout=10 00:03:55.342 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:03:55.347 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:03:55.352 > git remote # timeout=10 00:03:55.357 > git config --get remote.origin.url # timeout=10 00:03:55.362 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:03:55.367 > git config --get submodule.isa-l.url # timeout=10 00:03:55.373 > git remote # timeout=10 00:03:55.376 > git config --get remote.origin.url # timeout=10 00:03:55.381 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:03:55.386 > git config --get submodule.ocf.url # timeout=10 00:03:55.391 > git remote # timeout=10 00:03:55.396 > git config --get remote.origin.url # timeout=10 00:03:55.401 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:03:55.406 > git config --get submodule.libvfio-user.url # timeout=10 00:03:55.410 > git remote # timeout=10 00:03:55.415 > git config --get remote.origin.url # timeout=10 00:03:55.420 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:03:55.425 > git config --get submodule.xnvme.url # timeout=10 00:03:55.430 > git remote # timeout=10 00:03:55.435 > git config --get remote.origin.url # timeout=10 00:03:55.440 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:03:55.445 > git config --get submodule.isa-l-crypto.url # timeout=10 00:03:55.449 > git remote # timeout=10 00:03:55.454 > git config --get remote.origin.url # timeout=10 00:03:55.459 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:03:55.466 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:03:55.466 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:03:55.466 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:03:55.466 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:03:55.466 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:03:55.466 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:03:55.466 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:03:55.466 Setting http proxy: proxy-dmz.intel.com:911 00:03:55.466 Setting http proxy: proxy-dmz.intel.com:911 00:03:55.466 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:03:55.466 Setting http 
proxy: proxy-dmz.intel.com:911 00:03:55.466 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:03:55.466 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:03:55.467 Setting http proxy: proxy-dmz.intel.com:911 00:03:55.467 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:03:55.467 Setting http proxy: proxy-dmz.intel.com:911 00:03:55.467 Setting http proxy: proxy-dmz.intel.com:911 00:03:55.467 Setting http proxy: proxy-dmz.intel.com:911 00:03:55.467 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:03:55.467 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:03:55.467 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:04:03.586 [Pipeline] } 00:04:03.607 [Pipeline] // retry 00:04:03.616 [Pipeline] sh 00:04:03.898 + git -C spdk log --oneline -n5 00:04:03.898 36faa8c312b bdev/nvme: Fix the case that namespace was removed during reset 00:04:03.898 e2cb5a5eed9 bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:04:03.898 4b134b4abdb bdev/nvme: Delay callbacks when the next operation is a failover 00:04:03.898 d2ea4ecb14a llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:04:03.898 3b33f433344 test/nvme/cuse: Fix typo 00:04:03.910 [Pipeline] } 00:04:03.928 [Pipeline] // stage 00:04:03.937 [Pipeline] stage 00:04:03.939 [Pipeline] { (Prepare) 00:04:03.959 [Pipeline] writeFile 00:04:03.977 [Pipeline] sh 00:04:04.257 + logger -p user.info -t JENKINS-CI 00:04:04.269 [Pipeline] sh 00:04:04.550 + logger -p user.info -t JENKINS-CI 00:04:04.562 [Pipeline] sh 00:04:04.841 + cat autorun-spdk.conf 00:04:04.841 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:04.841 SPDK_TEST_FUZZER_SHORT=1 00:04:04.841 SPDK_TEST_FUZZER=1 00:04:04.841 SPDK_RUN_UBSAN=1 00:04:04.849 RUN_NIGHTLY=1 00:04:04.853 [Pipeline] readFile 00:04:04.878 [Pipeline] withEnv 00:04:04.880 [Pipeline] { 00:04:04.896 [Pipeline] sh 00:04:05.178 + set -ex 00:04:05.178 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:04:05.178 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:04:05.178 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:05.178 ++ SPDK_TEST_FUZZER_SHORT=1 00:04:05.178 ++ SPDK_TEST_FUZZER=1 00:04:05.178 ++ SPDK_RUN_UBSAN=1 00:04:05.178 ++ RUN_NIGHTLY=1 00:04:05.178 + case $SPDK_TEST_NVMF_NICS in 00:04:05.178 + DRIVERS= 00:04:05.178 + [[ -n '' ]] 00:04:05.178 + exit 0 00:04:05.187 [Pipeline] } 00:04:05.206 [Pipeline] // withEnv 00:04:05.212 [Pipeline] } 00:04:05.231 [Pipeline] // stage 00:04:05.240 [Pipeline] catchError 00:04:05.241 [Pipeline] { 00:04:05.254 [Pipeline] timeout 00:04:05.254 Timeout set to expire in 30 min 00:04:05.255 [Pipeline] { 00:04:05.269 [Pipeline] stage 00:04:05.271 [Pipeline] { (Tests) 00:04:05.284 [Pipeline] sh 00:04:05.562 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:04:05.562 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:04:05.562 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:04:05.562 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:04:05.562 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:05.562 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:04:05.562 + [[ -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:04:05.562 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:04:05.562 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:04:05.562 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:04:05.562 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:04:05.562 + source /etc/os-release 00:04:05.562 ++ NAME='Fedora Linux' 00:04:05.562 ++ VERSION='38 (Cloud Edition)' 00:04:05.562 ++ ID=fedora 00:04:05.562 ++ VERSION_ID=38 00:04:05.562 ++ VERSION_CODENAME= 00:04:05.562 ++ PLATFORM_ID=platform:f38 00:04:05.562 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:04:05.562 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:05.562 ++ LOGO=fedora-logo-icon 00:04:05.562 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:04:05.562 ++ HOME_URL=https://fedoraproject.org/ 00:04:05.562 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:04:05.562 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:05.562 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:05.562 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:05.562 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:04:05.562 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:05.562 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:04:05.562 ++ SUPPORT_END=2024-05-14 00:04:05.562 ++ VARIANT='Cloud Edition' 00:04:05.562 ++ VARIANT_ID=cloud 00:04:05.562 + uname -a 00:04:05.563 Linux spdk-wfp-39 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 02:47:10 UTC 2024 x86_64 GNU/Linux 00:04:05.563 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:08.850 Hugepages 00:04:08.850 node hugesize free / total 00:04:08.850 node0 1048576kB 0 / 0 00:04:08.850 node0 2048kB 0 / 0 00:04:08.850 node1 1048576kB 0 / 0 00:04:08.850 node1 2048kB 0 / 0 00:04:08.850 00:04:08.850 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.850 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:08.850 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:08.850 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:08.850 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:08.851 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:08.851 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:08.851 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:08.851 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:08.851 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:08.851 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:08.851 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:08.851 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:08.851 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:08.851 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:08.851 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:08.851 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:08.851 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:08.851 + rm -f /tmp/spdk-ld-path 00:04:08.851 + source autorun-spdk.conf 00:04:08.851 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:08.851 ++ SPDK_TEST_FUZZER_SHORT=1 00:04:08.851 ++ SPDK_TEST_FUZZER=1 00:04:08.851 ++ SPDK_RUN_UBSAN=1 00:04:08.851 ++ RUN_NIGHTLY=1 00:04:08.851 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:08.851 + [[ -n '' ]] 00:04:08.851 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:08.851 + for M in /var/spdk/build-*-manifest.txt 00:04:08.851 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:08.851 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:04:08.851 + for M in /var/spdk/build-*-manifest.txt 00:04:08.851 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:08.851 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:04:08.851 ++ uname 00:04:08.851 + [[ Linux == \L\i\n\u\x ]] 00:04:08.851 + sudo dmesg -T 00:04:08.851 + sudo dmesg --clear 00:04:08.851 + dmesg_pid=1056185 00:04:08.851 + [[ Fedora Linux == FreeBSD ]] 00:04:08.851 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:08.851 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:08.851 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:08.851 + [[ -x /usr/src/fio-static/fio ]] 00:04:08.851 + export FIO_BIN=/usr/src/fio-static/fio 00:04:08.851 + FIO_BIN=/usr/src/fio-static/fio 00:04:08.851 + sudo dmesg -Tw 00:04:08.851 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:08.851 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:08.851 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:08.851 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:08.851 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:08.851 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:08.851 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:08.851 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:08.851 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:04:08.851 Test configuration: 00:04:08.851 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:08.851 SPDK_TEST_FUZZER_SHORT=1 00:04:08.851 SPDK_TEST_FUZZER=1 00:04:08.851 SPDK_RUN_UBSAN=1 00:04:08.851 RUN_NIGHTLY=1 09:59:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:08.851 09:59:21 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:08.851 09:59:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.851 09:59:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.851 09:59:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.851 09:59:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.851 09:59:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.851 09:59:21 -- paths/export.sh@5 -- $ export PATH 00:04:08.851 09:59:21 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.851 09:59:21 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:04:08.851 09:59:21 -- common/autobuild_common.sh@435 -- $ date +%s 00:04:08.851 09:59:21 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713945561.XXXXXX 00:04:08.851 09:59:21 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713945561.mVfMuR 00:04:08.851 09:59:21 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:04:08.851 09:59:21 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:04:08.851 09:59:21 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:04:08.851 09:59:21 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:04:08.851 09:59:21 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:04:08.851 09:59:21 -- common/autobuild_common.sh@451 -- $ get_config_params 00:04:08.851 09:59:21 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:04:08.851 09:59:21 -- common/autotest_common.sh@10 -- $ set +x 00:04:08.851 09:59:21 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:04:08.851 09:59:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:08.851 09:59:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:08.851 09:59:21 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:08.851 09:59:21 -- spdk/autobuild.sh@16 -- $ date -u 00:04:08.851 Wed Apr 24 07:59:22 AM UTC 2024 00:04:08.851 09:59:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:08.851 LTS-24-g36faa8c312b 00:04:08.851 09:59:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:08.851 09:59:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:08.851 09:59:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:08.851 09:59:22 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:04:08.851 09:59:22 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:04:08.851 09:59:22 -- common/autotest_common.sh@10 -- $ set +x 00:04:08.851 ************************************ 00:04:08.851 START TEST ubsan 00:04:08.851 ************************************ 00:04:08.851 09:59:22 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:04:08.851 using ubsan 00:04:08.851 00:04:08.851 real 0m0.000s 00:04:08.851 user 0m0.000s 00:04:08.851 sys 0m0.000s 00:04:08.851 09:59:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:08.851 09:59:22 -- common/autotest_common.sh@10 -- $ set +x 00:04:08.851 ************************************ 00:04:08.851 END TEST ubsan 00:04:08.851 ************************************ 00:04:08.851 
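The "run_test ubsan" step above is executed through a wrapper that prints the START TEST / END TEST banners and the per-test real/user/sys timing. A minimal sketch of such a wrapper, approximating only the behaviour visible in the log (this is not SPDK's actual run_test from autotest_common.sh):

    run_test() {
        # usage: run_test <name> <command...>   e.g. run_test ubsan echo 'using ubsan'
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        local start=$SECONDS rc=0
        "$@" || rc=$?
        echo "************************************"
        echo "END TEST $name ($((SECONDS - start))s, rc=$rc)"
        echo "************************************"
        return $rc
    }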
09:59:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:08.851 09:59:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:08.851 09:59:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:08.851 09:59:22 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:04:08.851 09:59:22 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:04:08.851 09:59:22 -- common/autobuild_common.sh@423 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:04:08.851 09:59:22 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:04:08.851 09:59:22 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:04:08.851 09:59:22 -- common/autotest_common.sh@10 -- $ set +x 00:04:08.851 ************************************ 00:04:08.851 START TEST autobuild_llvm_precompile 00:04:08.851 ************************************ 00:04:08.851 09:59:22 -- common/autotest_common.sh@1104 -- $ _llvm_precompile 00:04:08.851 09:59:22 -- common/autobuild_common.sh@32 -- $ clang --version 00:04:08.851 09:59:22 -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:04:08.851 Target: x86_64-redhat-linux-gnu 00:04:08.851 Thread model: posix 00:04:08.851 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:04:08.851 09:59:22 -- common/autobuild_common.sh@33 -- $ clang_num=16 00:04:08.851 09:59:22 -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:04:08.851 09:59:22 -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:04:08.851 09:59:22 -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:04:08.851 09:59:22 -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:04:08.851 09:59:22 -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a) 00:04:08.851 09:59:22 -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:04:08.851 09:59:22 -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:04:08.851 09:59:22 -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:04:08.851 09:59:22 -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:04:09.419 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:04:09.419 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:04:09.677 Using 'verbs' RDMA provider 00:04:25.519 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:04:37.733 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:04:37.733 Creating mk/config.mk...done. 00:04:37.733 Creating mk/cc.flags.mk...done. 00:04:37.733 Type 'make' to build. 
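The xtrace lines above show how the precompile step derives the clang major version and the libFuzzer runtime path that ends up in --with-fuzzer. Paraphrased as a standalone snippet (variable names and the glob pattern are taken from the trace; the real logic lives in SPDK's autobuild_common.sh):

    shopt -s extglob nullglob                       # the @(...) glob below needs extglob
    if [[ "$(clang --version)" =~ version\ (([0-9]+)\.([0-9]+)\.([0-9]+)) ]]; then
        clang_version=${BASH_REMATCH[1]}            # 16.0.6 on this runner
        clang_num=${BASH_REMATCH[2]}                # 16
    fi
    export CC=clang-$clang_num CXX=clang++-$clang_num
    fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a)
    fuzzer_lib=${fuzzer_libs[0]}                    # /usr/lib64/clang/16/lib/linux/... here
    [[ -e "$fuzzer_lib" ]] && config_params+=" --with-fuzzer=$fuzzer_lib"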
00:04:37.733 00:04:37.733 real 0m28.432s 00:04:37.733 user 0m12.377s 00:04:37.733 sys 0m15.397s 00:04:37.733 09:59:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:37.733 09:59:50 -- common/autotest_common.sh@10 -- $ set +x 00:04:37.733 ************************************ 00:04:37.733 END TEST autobuild_llvm_precompile 00:04:37.733 ************************************ 00:04:37.733 09:59:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:37.733 09:59:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:37.733 09:59:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:37.733 09:59:50 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:04:37.733 09:59:50 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:04:37.733 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:04:37.733 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:04:37.991 Using 'verbs' RDMA provider 00:04:51.135 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:05:03.338 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:05:03.338 Creating mk/config.mk...done. 00:05:03.338 Creating mk/cc.flags.mk...done. 00:05:03.338 Type 'make' to build. 00:05:03.338 10:00:15 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:05:03.338 10:00:15 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:05:03.338 10:00:15 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:05:03.338 10:00:15 -- common/autotest_common.sh@10 -- $ set +x 00:05:03.338 ************************************ 00:05:03.338 START TEST make 00:05:03.338 ************************************ 00:05:03.338 10:00:15 -- common/autotest_common.sh@1104 -- $ make -j72 00:05:03.338 make[1]: Nothing to be done for 'all'. 
00:05:04.722 The Meson build system 00:05:04.722 Version: 1.3.1 00:05:04.722 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:05:04.722 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:04.722 Build type: native build 00:05:04.722 Project name: libvfio-user 00:05:04.722 Project version: 0.0.1 00:05:04.722 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:05:04.722 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:05:04.722 Host machine cpu family: x86_64 00:05:04.722 Host machine cpu: x86_64 00:05:04.722 Run-time dependency threads found: YES 00:05:04.722 Library dl found: YES 00:05:04.722 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:04.722 Run-time dependency json-c found: YES 0.17 00:05:04.722 Run-time dependency cmocka found: YES 1.1.7 00:05:04.722 Program pytest-3 found: NO 00:05:04.722 Program flake8 found: NO 00:05:04.722 Program misspell-fixer found: NO 00:05:04.722 Program restructuredtext-lint found: NO 00:05:04.722 Program valgrind found: YES (/usr/bin/valgrind) 00:05:04.722 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:04.722 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:04.722 Compiler for C supports arguments -Wwrite-strings: YES 00:05:04.722 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:05:04.722 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:05:04.722 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:05:04.722 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:05:04.722 Build targets in project: 8 00:05:04.722 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:05:04.722 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:05:04.722 00:05:04.722 libvfio-user 0.0.1 00:05:04.722 00:05:04.722 User defined options 00:05:04.722 buildtype : debug 00:05:04.722 default_library: static 00:05:04.722 libdir : /usr/local/lib 00:05:04.722 00:05:04.722 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:04.982 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:04.982 [1/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:05:04.982 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:05:04.982 [3/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:05:04.982 [4/36] Compiling C object samples/null.p/null.c.o 00:05:04.982 [5/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:05:04.982 [6/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:05:04.982 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:05:04.982 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:05:04.982 [9/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:05:04.982 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:05:04.982 [11/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:05:04.982 [12/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:05:04.982 [13/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:05:04.982 [14/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:05:04.982 [15/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:05:04.982 [16/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:05:04.982 [17/36] Compiling C object test/unit_tests.p/mocks.c.o 00:05:04.982 [18/36] Compiling C object samples/server.p/server.c.o 00:05:04.982 [19/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:05:04.982 [20/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:05:04.982 [21/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:05:04.982 [22/36] Compiling C object samples/client.p/client.c.o 00:05:04.982 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:05:04.982 [24/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:05:04.982 [25/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:05:04.982 [26/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:05:05.241 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:05:05.241 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:05:05.241 [29/36] Linking target samples/client 00:05:05.241 [30/36] Linking static target lib/libvfio-user.a 00:05:05.241 [31/36] Linking target test/unit_tests 00:05:05.241 [32/36] Linking target samples/server 00:05:05.241 [33/36] Linking target samples/null 00:05:05.241 [34/36] Linking target samples/lspci 00:05:05.241 [35/36] Linking target samples/shadow_ioeventfd_server 00:05:05.241 [36/36] Linking target samples/gpio-pci-idio-16 00:05:05.241 INFO: autodetecting backend as ninja 00:05:05.241 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:05.241 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:05.500 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:05.500 ninja: no work to do. 00:05:10.788 The Meson build system 00:05:10.788 Version: 1.3.1 00:05:10.788 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:05:10.788 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:05:10.788 Build type: native build 00:05:10.788 Program cat found: YES (/usr/bin/cat) 00:05:10.788 Project name: DPDK 00:05:10.788 Project version: 23.11.0 00:05:10.788 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:05:10.788 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:05:10.788 Host machine cpu family: x86_64 00:05:10.788 Host machine cpu: x86_64 00:05:10.788 Message: ## Building in Developer Mode ## 00:05:10.788 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:10.788 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:10.788 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:10.788 Program python3 found: YES (/usr/bin/python3) 00:05:10.788 Program cat found: YES (/usr/bin/cat) 00:05:10.788 Compiler for C supports arguments -march=native: YES 00:05:10.788 Checking for size of "void *" : 8 00:05:10.788 Checking for size of "void *" : 8 (cached) 00:05:10.788 Library m found: YES 00:05:10.788 Library numa found: YES 00:05:10.788 Has header "numaif.h" : YES 00:05:10.788 Library fdt found: NO 00:05:10.788 Library execinfo found: NO 00:05:10.788 Has header "execinfo.h" : YES 00:05:10.788 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:10.788 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:10.788 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:10.788 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:10.788 Run-time dependency openssl found: YES 3.0.9 00:05:10.788 Run-time dependency libpcap found: YES 1.10.4 00:05:10.788 Has header "pcap.h" with dependency libpcap: YES 00:05:10.788 Compiler for C supports arguments -Wcast-qual: YES 00:05:10.788 Compiler for C supports arguments -Wdeprecated: YES 00:05:10.788 Compiler for C supports arguments -Wformat: YES 00:05:10.788 Compiler for C supports arguments -Wformat-nonliteral: YES 00:05:10.788 Compiler for C supports arguments -Wformat-security: YES 00:05:10.788 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:10.788 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:10.788 Compiler for C supports arguments -Wnested-externs: YES 00:05:10.788 Compiler for C supports arguments -Wold-style-definition: YES 00:05:10.788 Compiler for C supports arguments -Wpointer-arith: YES 00:05:10.788 Compiler for C supports arguments -Wsign-compare: YES 00:05:10.788 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:10.788 Compiler for C supports arguments -Wundef: YES 00:05:10.788 Compiler for C supports arguments -Wwrite-strings: YES 00:05:10.788 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:10.788 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:05:10.788 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:10.788 Program objdump found: YES (/usr/bin/objdump) 00:05:10.788 
Compiler for C supports arguments -mavx512f: YES 00:05:10.788 Checking if "AVX512 checking" compiles: YES 00:05:10.788 Fetching value of define "__SSE4_2__" : 1 00:05:10.788 Fetching value of define "__AES__" : 1 00:05:10.788 Fetching value of define "__AVX__" : 1 00:05:10.788 Fetching value of define "__AVX2__" : 1 00:05:10.788 Fetching value of define "__AVX512BW__" : 1 00:05:10.788 Fetching value of define "__AVX512CD__" : 1 00:05:10.788 Fetching value of define "__AVX512DQ__" : 1 00:05:10.788 Fetching value of define "__AVX512F__" : 1 00:05:10.788 Fetching value of define "__AVX512VL__" : 1 00:05:10.788 Fetching value of define "__PCLMUL__" : 1 00:05:10.788 Fetching value of define "__RDRND__" : 1 00:05:10.788 Fetching value of define "__RDSEED__" : 1 00:05:10.788 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:10.788 Fetching value of define "__znver1__" : (undefined) 00:05:10.788 Fetching value of define "__znver2__" : (undefined) 00:05:10.788 Fetching value of define "__znver3__" : (undefined) 00:05:10.788 Fetching value of define "__znver4__" : (undefined) 00:05:10.788 Compiler for C supports arguments -Wno-format-truncation: NO 00:05:10.788 Message: lib/log: Defining dependency "log" 00:05:10.788 Message: lib/kvargs: Defining dependency "kvargs" 00:05:10.788 Message: lib/telemetry: Defining dependency "telemetry" 00:05:10.788 Checking for function "getentropy" : NO 00:05:10.788 Message: lib/eal: Defining dependency "eal" 00:05:10.788 Message: lib/ring: Defining dependency "ring" 00:05:10.788 Message: lib/rcu: Defining dependency "rcu" 00:05:10.788 Message: lib/mempool: Defining dependency "mempool" 00:05:10.788 Message: lib/mbuf: Defining dependency "mbuf" 00:05:10.788 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:10.788 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:10.788 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:10.788 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:10.788 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:10.788 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:10.788 Compiler for C supports arguments -mpclmul: YES 00:05:10.788 Compiler for C supports arguments -maes: YES 00:05:10.788 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:10.788 Compiler for C supports arguments -mavx512bw: YES 00:05:10.788 Compiler for C supports arguments -mavx512dq: YES 00:05:10.788 Compiler for C supports arguments -mavx512vl: YES 00:05:10.788 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:10.788 Compiler for C supports arguments -mavx2: YES 00:05:10.788 Compiler for C supports arguments -mavx: YES 00:05:10.788 Message: lib/net: Defining dependency "net" 00:05:10.788 Message: lib/meter: Defining dependency "meter" 00:05:10.788 Message: lib/ethdev: Defining dependency "ethdev" 00:05:10.788 Message: lib/pci: Defining dependency "pci" 00:05:10.788 Message: lib/cmdline: Defining dependency "cmdline" 00:05:10.788 Message: lib/hash: Defining dependency "hash" 00:05:10.788 Message: lib/timer: Defining dependency "timer" 00:05:10.788 Message: lib/compressdev: Defining dependency "compressdev" 00:05:10.788 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:10.788 Message: lib/dmadev: Defining dependency "dmadev" 00:05:10.788 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:10.788 Message: lib/power: Defining dependency "power" 00:05:10.788 Message: lib/reorder: Defining dependency "reorder" 00:05:10.788 Message: lib/security: Defining dependency 
"security" 00:05:10.788 Has header "linux/userfaultfd.h" : YES 00:05:10.788 Has header "linux/vduse.h" : YES 00:05:10.788 Message: lib/vhost: Defining dependency "vhost" 00:05:10.788 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:05:10.789 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:10.789 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:10.789 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:10.789 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:10.789 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:10.789 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:10.789 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:10.789 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:10.789 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:10.789 Program doxygen found: YES (/usr/bin/doxygen) 00:05:10.789 Configuring doxy-api-html.conf using configuration 00:05:10.789 Configuring doxy-api-man.conf using configuration 00:05:10.789 Program mandb found: YES (/usr/bin/mandb) 00:05:10.789 Program sphinx-build found: NO 00:05:10.789 Configuring rte_build_config.h using configuration 00:05:10.789 Message: 00:05:10.789 ================= 00:05:10.789 Applications Enabled 00:05:10.789 ================= 00:05:10.789 00:05:10.789 apps: 00:05:10.789 00:05:10.789 00:05:10.789 Message: 00:05:10.789 ================= 00:05:10.789 Libraries Enabled 00:05:10.789 ================= 00:05:10.789 00:05:10.789 libs: 00:05:10.789 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:10.789 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:10.789 cryptodev, dmadev, power, reorder, security, vhost, 00:05:10.789 00:05:10.789 Message: 00:05:10.789 =============== 00:05:10.789 Drivers Enabled 00:05:10.789 =============== 00:05:10.789 00:05:10.789 common: 00:05:10.789 00:05:10.789 bus: 00:05:10.789 pci, vdev, 00:05:10.789 mempool: 00:05:10.789 ring, 00:05:10.789 dma: 00:05:10.789 00:05:10.789 net: 00:05:10.789 00:05:10.789 crypto: 00:05:10.789 00:05:10.789 compress: 00:05:10.789 00:05:10.789 vdpa: 00:05:10.789 00:05:10.789 00:05:10.789 Message: 00:05:10.789 ================= 00:05:10.789 Content Skipped 00:05:10.789 ================= 00:05:10.789 00:05:10.789 apps: 00:05:10.789 dumpcap: explicitly disabled via build config 00:05:10.789 graph: explicitly disabled via build config 00:05:10.789 pdump: explicitly disabled via build config 00:05:10.789 proc-info: explicitly disabled via build config 00:05:10.789 test-acl: explicitly disabled via build config 00:05:10.789 test-bbdev: explicitly disabled via build config 00:05:10.789 test-cmdline: explicitly disabled via build config 00:05:10.789 test-compress-perf: explicitly disabled via build config 00:05:10.789 test-crypto-perf: explicitly disabled via build config 00:05:10.789 test-dma-perf: explicitly disabled via build config 00:05:10.789 test-eventdev: explicitly disabled via build config 00:05:10.789 test-fib: explicitly disabled via build config 00:05:10.789 test-flow-perf: explicitly disabled via build config 00:05:10.789 test-gpudev: explicitly disabled via build config 00:05:10.789 test-mldev: explicitly disabled via build config 00:05:10.789 test-pipeline: explicitly disabled via build config 00:05:10.789 test-pmd: explicitly disabled via build config 00:05:10.789 test-regex: explicitly disabled via 
build config 00:05:10.789 test-sad: explicitly disabled via build config 00:05:10.789 test-security-perf: explicitly disabled via build config 00:05:10.789 00:05:10.789 libs: 00:05:10.789 metrics: explicitly disabled via build config 00:05:10.789 acl: explicitly disabled via build config 00:05:10.789 bbdev: explicitly disabled via build config 00:05:10.789 bitratestats: explicitly disabled via build config 00:05:10.789 bpf: explicitly disabled via build config 00:05:10.789 cfgfile: explicitly disabled via build config 00:05:10.789 distributor: explicitly disabled via build config 00:05:10.789 efd: explicitly disabled via build config 00:05:10.789 eventdev: explicitly disabled via build config 00:05:10.789 dispatcher: explicitly disabled via build config 00:05:10.789 gpudev: explicitly disabled via build config 00:05:10.789 gro: explicitly disabled via build config 00:05:10.789 gso: explicitly disabled via build config 00:05:10.789 ip_frag: explicitly disabled via build config 00:05:10.789 jobstats: explicitly disabled via build config 00:05:10.789 latencystats: explicitly disabled via build config 00:05:10.789 lpm: explicitly disabled via build config 00:05:10.789 member: explicitly disabled via build config 00:05:10.789 pcapng: explicitly disabled via build config 00:05:10.789 rawdev: explicitly disabled via build config 00:05:10.789 regexdev: explicitly disabled via build config 00:05:10.789 mldev: explicitly disabled via build config 00:05:10.789 rib: explicitly disabled via build config 00:05:10.789 sched: explicitly disabled via build config 00:05:10.789 stack: explicitly disabled via build config 00:05:10.789 ipsec: explicitly disabled via build config 00:05:10.789 pdcp: explicitly disabled via build config 00:05:10.789 fib: explicitly disabled via build config 00:05:10.789 port: explicitly disabled via build config 00:05:10.789 pdump: explicitly disabled via build config 00:05:10.789 table: explicitly disabled via build config 00:05:10.789 pipeline: explicitly disabled via build config 00:05:10.789 graph: explicitly disabled via build config 00:05:10.789 node: explicitly disabled via build config 00:05:10.789 00:05:10.789 drivers: 00:05:10.789 common/cpt: not in enabled drivers build config 00:05:10.789 common/dpaax: not in enabled drivers build config 00:05:10.789 common/iavf: not in enabled drivers build config 00:05:10.789 common/idpf: not in enabled drivers build config 00:05:10.789 common/mvep: not in enabled drivers build config 00:05:10.789 common/octeontx: not in enabled drivers build config 00:05:10.789 bus/auxiliary: not in enabled drivers build config 00:05:10.789 bus/cdx: not in enabled drivers build config 00:05:10.789 bus/dpaa: not in enabled drivers build config 00:05:10.789 bus/fslmc: not in enabled drivers build config 00:05:10.789 bus/ifpga: not in enabled drivers build config 00:05:10.789 bus/platform: not in enabled drivers build config 00:05:10.789 bus/vmbus: not in enabled drivers build config 00:05:10.789 common/cnxk: not in enabled drivers build config 00:05:10.789 common/mlx5: not in enabled drivers build config 00:05:10.789 common/nfp: not in enabled drivers build config 00:05:10.789 common/qat: not in enabled drivers build config 00:05:10.789 common/sfc_efx: not in enabled drivers build config 00:05:10.789 mempool/bucket: not in enabled drivers build config 00:05:10.789 mempool/cnxk: not in enabled drivers build config 00:05:10.789 mempool/dpaa: not in enabled drivers build config 00:05:10.789 mempool/dpaa2: not in enabled drivers build config 00:05:10.789 
mempool/octeontx: not in enabled drivers build config 00:05:10.789 mempool/stack: not in enabled drivers build config 00:05:10.789 dma/cnxk: not in enabled drivers build config 00:05:10.789 dma/dpaa: not in enabled drivers build config 00:05:10.789 dma/dpaa2: not in enabled drivers build config 00:05:10.789 dma/hisilicon: not in enabled drivers build config 00:05:10.789 dma/idxd: not in enabled drivers build config 00:05:10.789 dma/ioat: not in enabled drivers build config 00:05:10.789 dma/skeleton: not in enabled drivers build config 00:05:10.789 net/af_packet: not in enabled drivers build config 00:05:10.789 net/af_xdp: not in enabled drivers build config 00:05:10.789 net/ark: not in enabled drivers build config 00:05:10.789 net/atlantic: not in enabled drivers build config 00:05:10.789 net/avp: not in enabled drivers build config 00:05:10.789 net/axgbe: not in enabled drivers build config 00:05:10.789 net/bnx2x: not in enabled drivers build config 00:05:10.789 net/bnxt: not in enabled drivers build config 00:05:10.789 net/bonding: not in enabled drivers build config 00:05:10.789 net/cnxk: not in enabled drivers build config 00:05:10.789 net/cpfl: not in enabled drivers build config 00:05:10.789 net/cxgbe: not in enabled drivers build config 00:05:10.789 net/dpaa: not in enabled drivers build config 00:05:10.789 net/dpaa2: not in enabled drivers build config 00:05:10.789 net/e1000: not in enabled drivers build config 00:05:10.789 net/ena: not in enabled drivers build config 00:05:10.789 net/enetc: not in enabled drivers build config 00:05:10.789 net/enetfec: not in enabled drivers build config 00:05:10.789 net/enic: not in enabled drivers build config 00:05:10.789 net/failsafe: not in enabled drivers build config 00:05:10.789 net/fm10k: not in enabled drivers build config 00:05:10.789 net/gve: not in enabled drivers build config 00:05:10.789 net/hinic: not in enabled drivers build config 00:05:10.789 net/hns3: not in enabled drivers build config 00:05:10.789 net/i40e: not in enabled drivers build config 00:05:10.789 net/iavf: not in enabled drivers build config 00:05:10.789 net/ice: not in enabled drivers build config 00:05:10.789 net/idpf: not in enabled drivers build config 00:05:10.789 net/igc: not in enabled drivers build config 00:05:10.789 net/ionic: not in enabled drivers build config 00:05:10.789 net/ipn3ke: not in enabled drivers build config 00:05:10.789 net/ixgbe: not in enabled drivers build config 00:05:10.789 net/mana: not in enabled drivers build config 00:05:10.789 net/memif: not in enabled drivers build config 00:05:10.789 net/mlx4: not in enabled drivers build config 00:05:10.789 net/mlx5: not in enabled drivers build config 00:05:10.789 net/mvneta: not in enabled drivers build config 00:05:10.789 net/mvpp2: not in enabled drivers build config 00:05:10.789 net/netvsc: not in enabled drivers build config 00:05:10.789 net/nfb: not in enabled drivers build config 00:05:10.789 net/nfp: not in enabled drivers build config 00:05:10.789 net/ngbe: not in enabled drivers build config 00:05:10.789 net/null: not in enabled drivers build config 00:05:10.789 net/octeontx: not in enabled drivers build config 00:05:10.789 net/octeon_ep: not in enabled drivers build config 00:05:10.789 net/pcap: not in enabled drivers build config 00:05:10.790 net/pfe: not in enabled drivers build config 00:05:10.790 net/qede: not in enabled drivers build config 00:05:10.790 net/ring: not in enabled drivers build config 00:05:10.790 net/sfc: not in enabled drivers build config 00:05:10.790 net/softnic: 
not in enabled drivers build config 00:05:10.790 net/tap: not in enabled drivers build config 00:05:10.790 net/thunderx: not in enabled drivers build config 00:05:10.790 net/txgbe: not in enabled drivers build config 00:05:10.790 net/vdev_netvsc: not in enabled drivers build config 00:05:10.790 net/vhost: not in enabled drivers build config 00:05:10.790 net/virtio: not in enabled drivers build config 00:05:10.790 net/vmxnet3: not in enabled drivers build config 00:05:10.790 raw/*: missing internal dependency, "rawdev" 00:05:10.790 crypto/armv8: not in enabled drivers build config 00:05:10.790 crypto/bcmfs: not in enabled drivers build config 00:05:10.790 crypto/caam_jr: not in enabled drivers build config 00:05:10.790 crypto/ccp: not in enabled drivers build config 00:05:10.790 crypto/cnxk: not in enabled drivers build config 00:05:10.790 crypto/dpaa_sec: not in enabled drivers build config 00:05:10.790 crypto/dpaa2_sec: not in enabled drivers build config 00:05:10.790 crypto/ipsec_mb: not in enabled drivers build config 00:05:10.790 crypto/mlx5: not in enabled drivers build config 00:05:10.790 crypto/mvsam: not in enabled drivers build config 00:05:10.790 crypto/nitrox: not in enabled drivers build config 00:05:10.790 crypto/null: not in enabled drivers build config 00:05:10.790 crypto/octeontx: not in enabled drivers build config 00:05:10.790 crypto/openssl: not in enabled drivers build config 00:05:10.790 crypto/scheduler: not in enabled drivers build config 00:05:10.790 crypto/uadk: not in enabled drivers build config 00:05:10.790 crypto/virtio: not in enabled drivers build config 00:05:10.790 compress/isal: not in enabled drivers build config 00:05:10.790 compress/mlx5: not in enabled drivers build config 00:05:10.790 compress/octeontx: not in enabled drivers build config 00:05:10.790 compress/zlib: not in enabled drivers build config 00:05:10.790 regex/*: missing internal dependency, "regexdev" 00:05:10.790 ml/*: missing internal dependency, "mldev" 00:05:10.790 vdpa/ifc: not in enabled drivers build config 00:05:10.790 vdpa/mlx5: not in enabled drivers build config 00:05:10.790 vdpa/nfp: not in enabled drivers build config 00:05:10.790 vdpa/sfc: not in enabled drivers build config 00:05:10.790 event/*: missing internal dependency, "eventdev" 00:05:10.790 baseband/*: missing internal dependency, "bbdev" 00:05:10.790 gpu/*: missing internal dependency, "gpudev" 00:05:10.790 00:05:10.790 00:05:10.790 Build targets in project: 85 00:05:10.790 00:05:10.790 DPDK 23.11.0 00:05:10.790 00:05:10.790 User defined options 00:05:10.790 buildtype : debug 00:05:10.790 default_library : static 00:05:10.790 libdir : lib 00:05:10.790 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:05:10.790 c_args : -fPIC -Werror 00:05:10.790 c_link_args : 00:05:10.790 cpu_instruction_set: native 00:05:10.790 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:10.790 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:10.790 enable_docs : false 00:05:10.790 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:10.790 enable_kmods : false 00:05:10.790 tests : false 00:05:10.790 
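The "User defined options" summary above corresponds to a meson setup invocation along these lines. This is reconstructed from the printed summary rather than copied from the log: the actual command is assembled by SPDK's dpdk build scripts, and the disable_apps/disable_libs lists are abbreviated here to the first few entries shown above.

    meson setup build-tmp \
        --buildtype=debug --default-library=static --libdir=lib \
        --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
        -Dc_args='-fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps='dumpcap,graph,pdump,proc-info,test-acl' \
        -Ddisable_libs='acl,bbdev,bitratestats,bpf,cfgfile' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Denable_kmods=false -Dtests=false
    ninja -C build-tmp                              # produces the [1/265] ... steps that follow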
00:05:10.790 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:10.790 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:05:11.053 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:11.053 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:11.053 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:11.053 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:11.053 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:11.053 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:11.053 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:11.053 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:11.053 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:11.053 [10/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:11.053 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:11.054 [12/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:11.054 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:11.054 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:11.054 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:11.054 [16/265] Linking static target lib/librte_kvargs.a 00:05:11.054 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:11.054 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:11.054 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:11.054 [20/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:11.054 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:11.054 [22/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:11.054 [23/265] Linking static target lib/librte_log.a 00:05:11.054 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:11.054 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:11.314 [26/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.573 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:11.573 [28/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:11.573 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:11.573 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:11.573 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:11.573 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:11.573 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:11.573 [34/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:11.573 [35/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:11.573 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:11.573 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:11.573 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:11.573 [39/265] Compiling C 
object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:11.573 [40/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:11.573 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:11.573 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:11.573 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:11.573 [44/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:11.573 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:11.573 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:11.573 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:11.573 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:11.573 [49/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:11.573 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:11.573 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:11.573 [52/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:11.573 [53/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:11.573 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:11.573 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:11.573 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:11.573 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:11.573 [58/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:11.573 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:11.573 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:11.573 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:11.573 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:11.573 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:11.573 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:11.573 [65/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:11.573 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:11.573 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:11.573 [68/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:11.573 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:11.573 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:11.573 [71/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:11.573 [72/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:11.573 [73/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:11.573 [74/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:11.574 [75/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:11.574 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:11.574 [77/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:11.574 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 
00:05:11.574 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:11.574 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:11.574 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:11.574 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:11.574 [83/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:11.574 [84/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:11.574 [85/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:11.574 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:11.574 [87/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:11.574 [88/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.574 [89/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:11.574 [90/265] Linking static target lib/librte_pci.a 00:05:11.574 [91/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:11.574 [92/265] Linking static target lib/librte_ring.a 00:05:11.574 [93/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:11.574 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:11.574 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:11.574 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:11.574 [97/265] Linking static target lib/librte_meter.a 00:05:11.574 [98/265] Linking static target lib/librte_telemetry.a 00:05:11.574 [99/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:11.574 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:11.574 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:11.574 [102/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:11.574 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:11.574 [104/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:11.574 [105/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:11.574 [106/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:11.574 [107/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:11.574 [108/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:11.574 [109/265] Linking static target lib/librte_eal.a 00:05:11.574 [110/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:11.574 [111/265] Linking target lib/librte_log.so.24.0 00:05:11.832 [112/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:11.832 [113/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:11.832 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:11.832 [115/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:11.832 [116/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:11.832 [117/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:11.832 [118/265] Linking static target lib/librte_rcu.a 00:05:11.832 [119/265] Linking static target lib/librte_net.a 00:05:11.832 [120/265] Linking static target lib/librte_mempool.a 00:05:11.832 [121/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:11.832 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:11.832 [123/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:11.832 [124/265] Linking static target lib/librte_mbuf.a 00:05:11.832 [125/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.832 [126/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:05:11.832 [127/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.832 [128/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.832 [129/265] Linking target lib/librte_kvargs.so.24.0 00:05:11.832 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:12.092 [131/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.092 [132/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.092 [133/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:12.092 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:12.092 [135/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:12.092 [136/265] Linking static target lib/librte_timer.a 00:05:12.092 [137/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.092 [138/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:12.092 [139/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:12.092 [140/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:12.092 [141/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:12.092 [142/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:05:12.092 [143/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:12.092 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:12.092 [145/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:12.092 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:12.092 [147/265] Linking target lib/librte_telemetry.so.24.0 00:05:12.092 [148/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:12.092 [149/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:12.092 [150/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:12.092 [151/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:12.092 [152/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:12.092 [153/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:12.092 [154/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:12.092 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:12.092 [156/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:12.092 [157/265] Linking static target lib/librte_cmdline.a 00:05:12.092 [158/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:12.092 [159/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:12.092 [160/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:12.092 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:12.092 [162/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:12.092 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:12.092 [164/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:12.092 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:12.092 [166/265] Linking static target lib/librte_compressdev.a 00:05:12.092 [167/265] Linking static target lib/librte_dmadev.a 00:05:12.092 [168/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:12.092 [169/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:12.352 [170/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:12.352 [171/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:12.352 [172/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:12.353 [173/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:12.353 [174/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:12.353 [175/265] Linking static target lib/librte_hash.a 00:05:12.353 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:12.353 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:12.353 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:12.353 [179/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:05:12.353 [180/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:12.353 [181/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:12.353 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:12.353 [183/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:12.353 [184/265] Linking static target lib/librte_power.a 00:05:12.353 [185/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:12.353 [186/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:12.353 [187/265] Linking static target lib/librte_reorder.a 00:05:12.353 [188/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:12.353 [189/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:12.353 [190/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:12.353 [191/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:12.353 [192/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:12.353 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:12.353 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:12.353 [195/265] Linking static target lib/librte_security.a 00:05:12.353 [196/265] Linking static target drivers/librte_bus_vdev.a 00:05:12.353 [197/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:12.353 [198/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:12.353 [199/265] Linking static target lib/librte_cryptodev.a 00:05:12.353 [200/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:12.353 [201/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:12.353 [202/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.353 [203/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.626 [204/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.626 [205/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:12.626 [206/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:12.626 [207/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:12.626 [208/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:12.626 [209/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:12.626 [210/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:12.626 [211/265] Linking static target drivers/librte_mempool_ring.a 00:05:12.626 [212/265] Linking static target drivers/librte_bus_pci.a 00:05:12.626 [213/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:12.626 [214/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.626 [215/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.626 [216/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.898 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:12.898 [218/265] Linking static target lib/librte_ethdev.a 00:05:12.898 [219/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.898 [220/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.157 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.157 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.415 [223/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:13.415 [224/265] Linking static target lib/librte_vhost.a 00:05:13.415 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.415 [226/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.794 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.363 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.052 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.989 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.989 [231/265] Linking target lib/librte_eal.so.24.0 00:05:23.248 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:05:23.248 [233/265] Linking target lib/librte_ring.so.24.0 00:05:23.248 [234/265] Linking target lib/librte_meter.so.24.0 00:05:23.248 [235/265] Linking target lib/librte_timer.so.24.0 00:05:23.248 [236/265] Linking target lib/librte_dmadev.so.24.0 00:05:23.248 [237/265] Linking target lib/librte_pci.so.24.0 00:05:23.248 [238/265] Linking target 
drivers/librte_bus_vdev.so.24.0 00:05:23.507 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:05:23.507 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:05:23.507 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:05:23.507 [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:05:23.507 [243/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:05:23.507 [244/265] Linking target lib/librte_rcu.so.24.0 00:05:23.508 [245/265] Linking target lib/librte_mempool.so.24.0 00:05:23.508 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:05:23.508 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:05:23.766 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:05:23.766 [249/265] Linking target lib/librte_mbuf.so.24.0 00:05:23.766 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:05:23.766 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:05:23.767 [252/265] Linking target lib/librte_net.so.24.0 00:05:24.026 [253/265] Linking target lib/librte_compressdev.so.24.0 00:05:24.026 [254/265] Linking target lib/librte_reorder.so.24.0 00:05:24.026 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:05:24.026 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:05:24.026 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:05:24.026 [258/265] Linking target lib/librte_hash.so.24.0 00:05:24.026 [259/265] Linking target lib/librte_security.so.24.0 00:05:24.026 [260/265] Linking target lib/librte_ethdev.so.24.0 00:05:24.026 [261/265] Linking target lib/librte_cmdline.so.24.0 00:05:24.286 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:05:24.286 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:05:24.286 [264/265] Linking target lib/librte_power.so.24.0 00:05:24.286 [265/265] Linking target lib/librte_vhost.so.24.0 00:05:24.286 INFO: autodetecting backend as ninja 00:05:24.286 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:05:25.221 CC lib/log/log.o 00:05:25.221 CC lib/log/log_deprecated.o 00:05:25.221 CC lib/log/log_flags.o 00:05:25.221 CC lib/ut_mock/mock.o 00:05:25.221 CC lib/ut/ut.o 00:05:25.221 LIB libspdk_ut_mock.a 00:05:25.221 LIB libspdk_log.a 00:05:25.480 LIB libspdk_ut.a 00:05:25.738 CXX lib/trace_parser/trace.o 00:05:25.739 CC lib/util/base64.o 00:05:25.739 CC lib/util/bit_array.o 00:05:25.739 CC lib/ioat/ioat.o 00:05:25.739 CC lib/util/cpuset.o 00:05:25.739 CC lib/util/crc16.o 00:05:25.739 CC lib/util/crc32.o 00:05:25.739 CC lib/util/crc32c.o 00:05:25.739 CC lib/util/crc32_ieee.o 00:05:25.739 CC lib/util/crc64.o 00:05:25.739 CC lib/util/dif.o 00:05:25.739 CC lib/util/fd.o 00:05:25.739 CC lib/dma/dma.o 00:05:25.739 CC lib/util/file.o 00:05:25.739 CC lib/util/hexlify.o 00:05:25.739 CC lib/util/iov.o 00:05:25.739 CC lib/util/math.o 00:05:25.739 CC lib/util/strerror_tls.o 00:05:25.739 CC lib/util/pipe.o 00:05:25.739 CC lib/util/string.o 00:05:25.739 CC lib/util/uuid.o 00:05:25.739 CC lib/util/fd_group.o 00:05:25.739 CC lib/util/xor.o 00:05:25.739 CC lib/util/zipf.o 00:05:25.739 CC lib/vfio_user/host/vfio_user.o 
00:05:25.739 CC lib/vfio_user/host/vfio_user_pci.o 00:05:25.739 LIB libspdk_dma.a 00:05:25.739 LIB libspdk_ioat.a 00:05:25.998 LIB libspdk_vfio_user.a 00:05:25.998 LIB libspdk_util.a 00:05:26.257 LIB libspdk_trace_parser.a 00:05:26.257 CC lib/json/json_parse.o 00:05:26.257 CC lib/json/json_write.o 00:05:26.257 CC lib/json/json_util.o 00:05:26.257 CC lib/vmd/vmd.o 00:05:26.257 CC lib/vmd/led.o 00:05:26.257 CC lib/idxd/idxd.o 00:05:26.257 CC lib/idxd/idxd_user.o 00:05:26.257 CC lib/rdma/common.o 00:05:26.257 CC lib/conf/conf.o 00:05:26.257 CC lib/rdma/rdma_verbs.o 00:05:26.257 CC lib/env_dpdk/memory.o 00:05:26.257 CC lib/env_dpdk/env.o 00:05:26.257 CC lib/env_dpdk/pci.o 00:05:26.257 CC lib/env_dpdk/init.o 00:05:26.257 CC lib/env_dpdk/threads.o 00:05:26.257 CC lib/env_dpdk/pci_ioat.o 00:05:26.257 CC lib/env_dpdk/pci_virtio.o 00:05:26.257 CC lib/env_dpdk/pci_vmd.o 00:05:26.257 CC lib/env_dpdk/sigbus_handler.o 00:05:26.257 CC lib/env_dpdk/pci_idxd.o 00:05:26.257 CC lib/env_dpdk/pci_event.o 00:05:26.257 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:26.257 CC lib/env_dpdk/pci_dpdk.o 00:05:26.257 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:26.515 LIB libspdk_conf.a 00:05:26.515 LIB libspdk_json.a 00:05:26.515 LIB libspdk_rdma.a 00:05:26.515 LIB libspdk_idxd.a 00:05:26.775 LIB libspdk_vmd.a 00:05:26.775 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:26.775 CC lib/jsonrpc/jsonrpc_server.o 00:05:26.775 CC lib/jsonrpc/jsonrpc_client.o 00:05:26.775 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:27.034 LIB libspdk_jsonrpc.a 00:05:27.294 LIB libspdk_env_dpdk.a 00:05:27.294 CC lib/rpc/rpc.o 00:05:27.294 LIB libspdk_rpc.a 00:05:27.553 CC lib/notify/notify.o 00:05:27.553 CC lib/notify/notify_rpc.o 00:05:27.553 CC lib/trace/trace.o 00:05:27.553 CC lib/trace/trace_flags.o 00:05:27.553 CC lib/trace/trace_rpc.o 00:05:27.813 CC lib/sock/sock_rpc.o 00:05:27.813 CC lib/sock/sock.o 00:05:27.813 LIB libspdk_notify.a 00:05:27.813 LIB libspdk_trace.a 00:05:28.073 LIB libspdk_sock.a 00:05:28.073 CC lib/thread/thread.o 00:05:28.073 CC lib/thread/iobuf.o 00:05:28.331 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:28.331 CC lib/nvme/nvme_ctrlr.o 00:05:28.331 CC lib/nvme/nvme_fabric.o 00:05:28.331 CC lib/nvme/nvme_ns_cmd.o 00:05:28.331 CC lib/nvme/nvme_ns.o 00:05:28.331 CC lib/nvme/nvme_pcie_common.o 00:05:28.331 CC lib/nvme/nvme_pcie.o 00:05:28.331 CC lib/nvme/nvme_qpair.o 00:05:28.331 CC lib/nvme/nvme_quirks.o 00:05:28.331 CC lib/nvme/nvme.o 00:05:28.331 CC lib/nvme/nvme_transport.o 00:05:28.331 CC lib/nvme/nvme_discovery.o 00:05:28.331 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:28.331 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:28.331 CC lib/nvme/nvme_tcp.o 00:05:28.331 CC lib/nvme/nvme_opal.o 00:05:28.331 CC lib/nvme/nvme_io_msg.o 00:05:28.331 CC lib/nvme/nvme_poll_group.o 00:05:28.331 CC lib/nvme/nvme_zns.o 00:05:28.331 CC lib/nvme/nvme_cuse.o 00:05:28.331 CC lib/nvme/nvme_rdma.o 00:05:28.331 CC lib/nvme/nvme_vfio_user.o 00:05:28.898 LIB libspdk_thread.a 00:05:29.157 CC lib/init/json_config.o 00:05:29.157 CC lib/init/subsystem.o 00:05:29.157 CC lib/init/subsystem_rpc.o 00:05:29.157 CC lib/init/rpc.o 00:05:29.157 CC lib/virtio/virtio.o 00:05:29.157 CC lib/virtio/virtio_vhost_user.o 00:05:29.157 CC lib/virtio/virtio_vfio_user.o 00:05:29.157 CC lib/virtio/virtio_pci.o 00:05:29.157 CC lib/accel/accel.o 00:05:29.157 CC lib/vfu_tgt/tgt_endpoint.o 00:05:29.157 CC lib/accel/accel_sw.o 00:05:29.157 CC lib/accel/accel_rpc.o 00:05:29.157 CC lib/vfu_tgt/tgt_rpc.o 00:05:29.416 CC lib/blob/blobstore.o 00:05:29.416 CC lib/blob/request.o 00:05:29.416 CC lib/blob/zeroes.o 
00:05:29.416 CC lib/blob/blob_bs_dev.o 00:05:29.416 LIB libspdk_init.a 00:05:29.416 LIB libspdk_virtio.a 00:05:29.416 LIB libspdk_vfu_tgt.a 00:05:29.675 LIB libspdk_nvme.a 00:05:29.675 CC lib/event/app.o 00:05:29.675 CC lib/event/log_rpc.o 00:05:29.675 CC lib/event/reactor.o 00:05:29.675 CC lib/event/app_rpc.o 00:05:29.675 CC lib/event/scheduler_static.o 00:05:29.933 LIB libspdk_event.a 00:05:29.933 LIB libspdk_accel.a 00:05:30.192 CC lib/bdev/bdev.o 00:05:30.192 CC lib/bdev/bdev_rpc.o 00:05:30.192 CC lib/bdev/bdev_zone.o 00:05:30.192 CC lib/bdev/part.o 00:05:30.192 CC lib/bdev/scsi_nvme.o 00:05:31.128 LIB libspdk_blob.a 00:05:31.128 CC lib/blobfs/blobfs.o 00:05:31.128 CC lib/blobfs/tree.o 00:05:31.386 CC lib/lvol/lvol.o 00:05:31.644 LIB libspdk_lvol.a 00:05:31.644 LIB libspdk_blobfs.a 00:05:32.210 LIB libspdk_bdev.a 00:05:32.474 CC lib/scsi/dev.o 00:05:32.474 CC lib/scsi/port.o 00:05:32.474 CC lib/scsi/lun.o 00:05:32.474 CC lib/scsi/scsi.o 00:05:32.474 CC lib/scsi/scsi_bdev.o 00:05:32.474 CC lib/scsi/scsi_pr.o 00:05:32.474 CC lib/scsi/scsi_rpc.o 00:05:32.474 CC lib/scsi/task.o 00:05:32.474 CC lib/nbd/nbd.o 00:05:32.474 CC lib/nbd/nbd_rpc.o 00:05:32.474 CC lib/nvmf/ctrlr.o 00:05:32.474 CC lib/ublk/ublk_rpc.o 00:05:32.474 CC lib/nvmf/ctrlr_bdev.o 00:05:32.474 CC lib/nvmf/ctrlr_discovery.o 00:05:32.474 CC lib/ublk/ublk.o 00:05:32.474 CC lib/nvmf/transport.o 00:05:32.474 CC lib/nvmf/subsystem.o 00:05:32.474 CC lib/nvmf/nvmf.o 00:05:32.474 CC lib/nvmf/nvmf_rpc.o 00:05:32.474 CC lib/nvmf/vfio_user.o 00:05:32.474 CC lib/nvmf/tcp.o 00:05:32.474 CC lib/nvmf/rdma.o 00:05:32.474 CC lib/ftl/ftl_core.o 00:05:32.474 CC lib/ftl/ftl_init.o 00:05:32.474 CC lib/ftl/ftl_io.o 00:05:32.474 CC lib/ftl/ftl_layout.o 00:05:32.474 CC lib/ftl/ftl_debug.o 00:05:32.474 CC lib/ftl/ftl_sb.o 00:05:32.474 CC lib/ftl/ftl_nv_cache.o 00:05:32.474 CC lib/ftl/ftl_l2p.o 00:05:32.474 CC lib/ftl/ftl_l2p_flat.o 00:05:32.474 CC lib/ftl/ftl_band.o 00:05:32.474 CC lib/ftl/ftl_band_ops.o 00:05:32.474 CC lib/ftl/ftl_writer.o 00:05:32.474 CC lib/ftl/ftl_reloc.o 00:05:32.474 CC lib/ftl/ftl_rq.o 00:05:32.474 CC lib/ftl/ftl_p2l.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt.o 00:05:32.474 CC lib/ftl/ftl_l2p_cache.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:32.474 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:32.474 CC lib/ftl/utils/ftl_conf.o 00:05:32.474 CC lib/ftl/utils/ftl_md.o 00:05:32.474 CC lib/ftl/utils/ftl_mempool.o 00:05:32.474 CC lib/ftl/utils/ftl_bitmap.o 00:05:32.474 CC lib/ftl/utils/ftl_property.o 00:05:32.474 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:32.474 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:32.474 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:32.474 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:32.474 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:32.474 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:32.474 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:32.474 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:32.474 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:32.474 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:32.474 CC lib/ftl/base/ftl_base_dev.o 00:05:32.474 CC lib/ftl/base/ftl_base_bdev.o 00:05:32.474 
CC lib/ftl/ftl_trace.o 00:05:32.731 LIB libspdk_nbd.a 00:05:32.991 LIB libspdk_scsi.a 00:05:32.991 LIB libspdk_ublk.a 00:05:33.249 LIB libspdk_ftl.a 00:05:33.249 CC lib/vhost/vhost_scsi.o 00:05:33.249 CC lib/vhost/vhost.o 00:05:33.249 CC lib/vhost/vhost_rpc.o 00:05:33.249 CC lib/vhost/rte_vhost_user.o 00:05:33.249 CC lib/vhost/vhost_blk.o 00:05:33.249 CC lib/iscsi/conn.o 00:05:33.249 CC lib/iscsi/md5.o 00:05:33.249 CC lib/iscsi/init_grp.o 00:05:33.249 CC lib/iscsi/iscsi.o 00:05:33.249 CC lib/iscsi/param.o 00:05:33.249 CC lib/iscsi/portal_grp.o 00:05:33.249 CC lib/iscsi/tgt_node.o 00:05:33.249 CC lib/iscsi/iscsi_subsystem.o 00:05:33.249 CC lib/iscsi/iscsi_rpc.o 00:05:33.249 CC lib/iscsi/task.o 00:05:33.817 LIB libspdk_nvmf.a 00:05:33.817 LIB libspdk_vhost.a 00:05:34.076 LIB libspdk_iscsi.a 00:05:34.334 CC module/vfu_device/vfu_virtio.o 00:05:34.334 CC module/vfu_device/vfu_virtio_blk.o 00:05:34.334 CC module/vfu_device/vfu_virtio_scsi.o 00:05:34.334 CC module/vfu_device/vfu_virtio_rpc.o 00:05:34.592 CC module/env_dpdk/env_dpdk_rpc.o 00:05:34.592 CC module/scheduler/gscheduler/gscheduler.o 00:05:34.592 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:34.592 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:34.592 CC module/blob/bdev/blob_bdev.o 00:05:34.592 CC module/sock/posix/posix.o 00:05:34.592 CC module/accel/error/accel_error.o 00:05:34.592 CC module/accel/error/accel_error_rpc.o 00:05:34.592 CC module/accel/ioat/accel_ioat.o 00:05:34.592 CC module/accel/ioat/accel_ioat_rpc.o 00:05:34.592 CC module/accel/dsa/accel_dsa.o 00:05:34.592 CC module/accel/dsa/accel_dsa_rpc.o 00:05:34.592 LIB libspdk_env_dpdk_rpc.a 00:05:34.592 CC module/accel/iaa/accel_iaa.o 00:05:34.592 CC module/accel/iaa/accel_iaa_rpc.o 00:05:34.592 LIB libspdk_scheduler_gscheduler.a 00:05:34.592 LIB libspdk_scheduler_dpdk_governor.a 00:05:34.592 LIB libspdk_scheduler_dynamic.a 00:05:34.592 LIB libspdk_accel_error.a 00:05:34.849 LIB libspdk_accel_ioat.a 00:05:34.849 LIB libspdk_blob_bdev.a 00:05:34.849 LIB libspdk_accel_iaa.a 00:05:34.849 LIB libspdk_accel_dsa.a 00:05:34.849 LIB libspdk_vfu_device.a 00:05:35.107 LIB libspdk_sock_posix.a 00:05:35.107 CC module/bdev/malloc/bdev_malloc.o 00:05:35.107 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:35.107 CC module/bdev/passthru/vbdev_passthru.o 00:05:35.107 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:35.107 CC module/bdev/null/bdev_null_rpc.o 00:05:35.107 CC module/bdev/null/bdev_null.o 00:05:35.107 CC module/bdev/error/vbdev_error.o 00:05:35.107 CC module/bdev/lvol/vbdev_lvol.o 00:05:35.107 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:35.107 CC module/bdev/split/vbdev_split.o 00:05:35.107 CC module/bdev/error/vbdev_error_rpc.o 00:05:35.107 CC module/bdev/split/vbdev_split_rpc.o 00:05:35.107 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:35.107 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:35.107 CC module/bdev/gpt/gpt.o 00:05:35.108 CC module/bdev/gpt/vbdev_gpt.o 00:05:35.108 CC module/bdev/iscsi/bdev_iscsi.o 00:05:35.108 CC module/bdev/ftl/bdev_ftl.o 00:05:35.108 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:35.108 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:35.108 CC module/bdev/delay/vbdev_delay.o 00:05:35.108 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:35.108 CC module/bdev/aio/bdev_aio.o 00:05:35.108 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:35.108 CC module/bdev/aio/bdev_aio_rpc.o 00:05:35.108 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:35.108 CC module/blobfs/bdev/blobfs_bdev.o 00:05:35.108 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:35.108 
CC module/bdev/nvme/bdev_nvme.o 00:05:35.108 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:35.108 CC module/bdev/nvme/nvme_rpc.o 00:05:35.108 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:35.108 CC module/bdev/nvme/vbdev_opal.o 00:05:35.108 CC module/bdev/nvme/bdev_mdns_client.o 00:05:35.108 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:35.108 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:35.108 CC module/bdev/raid/bdev_raid.o 00:05:35.108 CC module/bdev/raid/bdev_raid_rpc.o 00:05:35.108 CC module/bdev/raid/raid0.o 00:05:35.108 CC module/bdev/raid/bdev_raid_sb.o 00:05:35.108 CC module/bdev/raid/raid1.o 00:05:35.108 CC module/bdev/raid/concat.o 00:05:35.366 LIB libspdk_bdev_null.a 00:05:35.366 LIB libspdk_bdev_error.a 00:05:35.366 LIB libspdk_bdev_gpt.a 00:05:35.366 LIB libspdk_bdev_split.a 00:05:35.366 LIB libspdk_bdev_passthru.a 00:05:35.366 LIB libspdk_bdev_ftl.a 00:05:35.366 LIB libspdk_blobfs_bdev.a 00:05:35.366 LIB libspdk_bdev_aio.a 00:05:35.366 LIB libspdk_bdev_malloc.a 00:05:35.366 LIB libspdk_bdev_zone_block.a 00:05:35.366 LIB libspdk_bdev_delay.a 00:05:35.625 LIB libspdk_bdev_lvol.a 00:05:35.625 LIB libspdk_bdev_iscsi.a 00:05:35.625 LIB libspdk_bdev_virtio.a 00:05:35.625 LIB libspdk_bdev_raid.a 00:05:36.563 LIB libspdk_bdev_nvme.a 00:05:37.131 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:37.131 CC module/event/subsystems/scheduler/scheduler.o 00:05:37.131 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:37.131 CC module/event/subsystems/sock/sock.o 00:05:37.131 CC module/event/subsystems/vmd/vmd.o 00:05:37.131 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:37.131 CC module/event/subsystems/iobuf/iobuf.o 00:05:37.131 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:37.131 LIB libspdk_event_vhost_blk.a 00:05:37.131 LIB libspdk_event_scheduler.a 00:05:37.131 LIB libspdk_event_sock.a 00:05:37.131 LIB libspdk_event_vfu_tgt.a 00:05:37.131 LIB libspdk_event_vmd.a 00:05:37.131 LIB libspdk_event_iobuf.a 00:05:37.390 CC module/event/subsystems/accel/accel.o 00:05:37.650 LIB libspdk_event_accel.a 00:05:37.909 CC module/event/subsystems/bdev/bdev.o 00:05:37.909 LIB libspdk_event_bdev.a 00:05:38.170 CC module/event/subsystems/nbd/nbd.o 00:05:38.429 CC module/event/subsystems/scsi/scsi.o 00:05:38.429 CC module/event/subsystems/ublk/ublk.o 00:05:38.429 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:38.429 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:38.429 LIB libspdk_event_nbd.a 00:05:38.429 LIB libspdk_event_ublk.a 00:05:38.429 LIB libspdk_event_scsi.a 00:05:38.429 LIB libspdk_event_nvmf.a 00:05:38.689 CC module/event/subsystems/iscsi/iscsi.o 00:05:38.689 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:38.948 LIB libspdk_event_vhost_scsi.a 00:05:38.948 LIB libspdk_event_iscsi.a 00:05:39.209 CC app/trace_record/trace_record.o 00:05:39.209 CC app/spdk_top/spdk_top.o 00:05:39.209 CXX app/trace/trace.o 00:05:39.209 CC app/spdk_lspci/spdk_lspci.o 00:05:39.209 CC app/spdk_nvme_identify/identify.o 00:05:39.209 CC app/spdk_nvme_perf/perf.o 00:05:39.209 CC app/spdk_nvme_discover/discovery_aer.o 00:05:39.209 TEST_HEADER include/spdk/accel.h 00:05:39.209 TEST_HEADER include/spdk/accel_module.h 00:05:39.209 TEST_HEADER include/spdk/assert.h 00:05:39.209 TEST_HEADER include/spdk/barrier.h 00:05:39.209 CC test/rpc_client/rpc_client_test.o 00:05:39.209 TEST_HEADER include/spdk/base64.h 00:05:39.209 TEST_HEADER include/spdk/bdev.h 00:05:39.209 TEST_HEADER include/spdk/bdev_module.h 00:05:39.209 CC app/spdk_dd/spdk_dd.o 00:05:39.209 TEST_HEADER include/spdk/bdev_zone.h 00:05:39.209 
TEST_HEADER include/spdk/bit_array.h 00:05:39.209 TEST_HEADER include/spdk/bit_pool.h 00:05:39.209 TEST_HEADER include/spdk/blob_bdev.h 00:05:39.209 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:39.209 TEST_HEADER include/spdk/blobfs.h 00:05:39.209 CC app/iscsi_tgt/iscsi_tgt.o 00:05:39.209 TEST_HEADER include/spdk/blob.h 00:05:39.209 TEST_HEADER include/spdk/conf.h 00:05:39.209 TEST_HEADER include/spdk/config.h 00:05:39.209 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:39.209 TEST_HEADER include/spdk/cpuset.h 00:05:39.209 CC app/vhost/vhost.o 00:05:39.209 CC app/nvmf_tgt/nvmf_main.o 00:05:39.209 TEST_HEADER include/spdk/crc16.h 00:05:39.209 TEST_HEADER include/spdk/crc32.h 00:05:39.209 TEST_HEADER include/spdk/crc64.h 00:05:39.209 CC app/spdk_tgt/spdk_tgt.o 00:05:39.209 TEST_HEADER include/spdk/dif.h 00:05:39.209 TEST_HEADER include/spdk/dma.h 00:05:39.209 TEST_HEADER include/spdk/endian.h 00:05:39.209 TEST_HEADER include/spdk/env_dpdk.h 00:05:39.209 TEST_HEADER include/spdk/env.h 00:05:39.209 TEST_HEADER include/spdk/event.h 00:05:39.209 TEST_HEADER include/spdk/fd_group.h 00:05:39.209 CC app/fio/nvme/fio_plugin.o 00:05:39.209 TEST_HEADER include/spdk/fd.h 00:05:39.209 TEST_HEADER include/spdk/file.h 00:05:39.209 TEST_HEADER include/spdk/ftl.h 00:05:39.209 TEST_HEADER include/spdk/gpt_spec.h 00:05:39.209 CC examples/vmd/lsvmd/lsvmd.o 00:05:39.209 TEST_HEADER include/spdk/hexlify.h 00:05:39.209 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:39.209 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:39.209 CC test/nvme/e2edp/nvme_dp.o 00:05:39.209 CC test/nvme/reset/reset.o 00:05:39.209 CC test/nvme/sgl/sgl.o 00:05:39.209 CC examples/nvme/abort/abort.o 00:05:39.209 TEST_HEADER include/spdk/histogram_data.h 00:05:39.209 CC test/nvme/connect_stress/connect_stress.o 00:05:39.209 CC test/nvme/simple_copy/simple_copy.o 00:05:39.209 CC test/nvme/compliance/nvme_compliance.o 00:05:39.209 CC examples/ioat/verify/verify.o 00:05:39.209 CC test/nvme/overhead/overhead.o 00:05:39.209 CC examples/vmd/led/led.o 00:05:39.209 TEST_HEADER include/spdk/idxd.h 00:05:39.209 CC test/app/stub/stub.o 00:05:39.209 CC examples/ioat/perf/perf.o 00:05:39.209 CC test/event/reactor/reactor.o 00:05:39.209 CC examples/idxd/perf/perf.o 00:05:39.210 CC test/event/reactor_perf/reactor_perf.o 00:05:39.210 CC examples/sock/hello_world/hello_sock.o 00:05:39.210 TEST_HEADER include/spdk/idxd_spec.h 00:05:39.210 CC examples/nvme/arbitration/arbitration.o 00:05:39.210 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:39.210 CC test/nvme/fdp/fdp.o 00:05:39.210 CC test/nvme/boot_partition/boot_partition.o 00:05:39.210 CC examples/accel/perf/accel_perf.o 00:05:39.210 CC test/nvme/startup/startup.o 00:05:39.210 CC examples/nvme/hotplug/hotplug.o 00:05:39.210 CC examples/nvme/reconnect/reconnect.o 00:05:39.210 CC test/app/histogram_perf/histogram_perf.o 00:05:39.210 CC test/nvme/aer/aer.o 00:05:39.210 CC test/event/event_perf/event_perf.o 00:05:39.210 TEST_HEADER include/spdk/init.h 00:05:39.210 CC test/nvme/fused_ordering/fused_ordering.o 00:05:39.210 CC test/nvme/reserve/reserve.o 00:05:39.210 CC examples/util/zipf/zipf.o 00:05:39.210 TEST_HEADER include/spdk/ioat.h 00:05:39.210 CC examples/nvme/hello_world/hello_world.o 00:05:39.210 CC test/nvme/err_injection/err_injection.o 00:05:39.210 TEST_HEADER include/spdk/ioat_spec.h 00:05:39.210 CC test/nvme/cuse/cuse.o 00:05:39.210 CC test/thread/poller_perf/poller_perf.o 00:05:39.210 CC test/app/jsoncat/jsoncat.o 00:05:39.472 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:39.472 
TEST_HEADER include/spdk/iscsi_spec.h 00:05:39.472 CC examples/bdev/bdevperf/bdevperf.o 00:05:39.472 TEST_HEADER include/spdk/json.h 00:05:39.472 TEST_HEADER include/spdk/jsonrpc.h 00:05:39.472 TEST_HEADER include/spdk/likely.h 00:05:39.473 CC app/fio/bdev/fio_plugin.o 00:05:39.473 LINK spdk_lspci 00:05:39.473 CC test/event/app_repeat/app_repeat.o 00:05:39.473 TEST_HEADER include/spdk/log.h 00:05:39.473 TEST_HEADER include/spdk/lvol.h 00:05:39.473 CC examples/blob/hello_world/hello_blob.o 00:05:39.473 CC examples/bdev/hello_world/hello_bdev.o 00:05:39.473 TEST_HEADER include/spdk/memory.h 00:05:39.473 CC examples/blob/cli/blobcli.o 00:05:39.473 CC examples/thread/thread/thread_ex.o 00:05:39.473 TEST_HEADER include/spdk/mmio.h 00:05:39.473 TEST_HEADER include/spdk/nbd.h 00:05:39.473 CC test/app/bdev_svc/bdev_svc.o 00:05:39.473 TEST_HEADER include/spdk/notify.h 00:05:39.473 CC test/dma/test_dma/test_dma.o 00:05:39.473 TEST_HEADER include/spdk/nvme.h 00:05:39.473 TEST_HEADER include/spdk/nvme_intel.h 00:05:39.473 CC examples/nvmf/nvmf/nvmf.o 00:05:39.473 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:39.473 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:39.473 CC test/accel/dif/dif.o 00:05:39.473 TEST_HEADER include/spdk/nvme_spec.h 00:05:39.473 TEST_HEADER include/spdk/nvme_zns.h 00:05:39.473 CC test/blobfs/mkfs/mkfs.o 00:05:39.473 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:39.473 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:39.473 CC test/event/scheduler/scheduler.o 00:05:39.473 CC test/bdev/bdevio/bdevio.o 00:05:39.473 TEST_HEADER include/spdk/nvmf.h 00:05:39.473 TEST_HEADER include/spdk/nvmf_spec.h 00:05:39.473 TEST_HEADER include/spdk/nvmf_transport.h 00:05:39.473 TEST_HEADER include/spdk/opal.h 00:05:39.473 TEST_HEADER include/spdk/opal_spec.h 00:05:39.473 TEST_HEADER include/spdk/pci_ids.h 00:05:39.473 TEST_HEADER include/spdk/pipe.h 00:05:39.473 TEST_HEADER include/spdk/queue.h 00:05:39.473 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:39.473 TEST_HEADER include/spdk/reduce.h 00:05:39.473 LINK rpc_client_test 00:05:39.473 TEST_HEADER include/spdk/rpc.h 00:05:39.473 LINK spdk_nvme_discover 00:05:39.473 TEST_HEADER include/spdk/scheduler.h 00:05:39.473 CC test/lvol/esnap/esnap.o 00:05:39.473 TEST_HEADER include/spdk/scsi.h 00:05:39.473 TEST_HEADER include/spdk/scsi_spec.h 00:05:39.473 CC test/env/mem_callbacks/mem_callbacks.o 00:05:39.473 TEST_HEADER include/spdk/sock.h 00:05:39.473 TEST_HEADER include/spdk/stdinc.h 00:05:39.473 TEST_HEADER include/spdk/string.h 00:05:39.473 TEST_HEADER include/spdk/thread.h 00:05:39.473 TEST_HEADER include/spdk/trace.h 00:05:39.473 TEST_HEADER include/spdk/trace_parser.h 00:05:39.473 TEST_HEADER include/spdk/tree.h 00:05:39.473 TEST_HEADER include/spdk/ublk.h 00:05:39.473 TEST_HEADER include/spdk/util.h 00:05:39.473 LINK spdk_trace_record 00:05:39.473 TEST_HEADER include/spdk/uuid.h 00:05:39.473 TEST_HEADER include/spdk/version.h 00:05:39.473 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:39.473 LINK interrupt_tgt 00:05:39.473 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:39.473 TEST_HEADER include/spdk/vhost.h 00:05:39.473 LINK lsvmd 00:05:39.473 TEST_HEADER include/spdk/vmd.h 00:05:39.473 TEST_HEADER include/spdk/xor.h 00:05:39.473 TEST_HEADER include/spdk/zipf.h 00:05:39.473 CXX test/cpp_headers/accel.o 00:05:39.473 LINK led 00:05:39.473 LINK reactor_perf 00:05:39.473 LINK nvmf_tgt 00:05:39.473 LINK reactor 00:05:39.473 LINK jsoncat 00:05:39.473 LINK iscsi_tgt 00:05:39.473 LINK histogram_perf 00:05:39.473 LINK vhost 00:05:39.473 LINK event_perf 
00:05:39.473 LINK poller_perf 00:05:39.473 LINK zipf 00:05:39.473 fio_plugin.c:1491:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:05:39.473 struct spdk_nvme_fdp_ruhs ruhs; 00:05:39.473 ^ 00:05:39.473 LINK pmr_persistence 00:05:39.473 LINK boot_partition 00:05:39.473 LINK connect_stress 00:05:39.473 LINK app_repeat 00:05:39.473 LINK doorbell_aers 00:05:39.473 LINK startup 00:05:39.473 LINK spdk_tgt 00:05:39.473 LINK stub 00:05:39.473 LINK err_injection 00:05:39.473 LINK cmb_copy 00:05:39.473 LINK verify 00:05:39.473 LINK reserve 00:05:39.734 LINK ioat_perf 00:05:39.734 LINK fused_ordering 00:05:39.734 LINK simple_copy 00:05:39.734 LINK hello_world 00:05:39.734 LINK bdev_svc 00:05:39.734 LINK hotplug 00:05:39.734 LINK hello_sock 00:05:39.734 LINK spdk_trace 00:05:39.734 LINK nvme_dp 00:05:39.734 LINK hello_blob 00:05:39.734 LINK reset 00:05:39.734 LINK thread 00:05:39.734 LINK mkfs 00:05:39.734 LINK aer 00:05:39.734 LINK fdp 00:05:39.734 LINK hello_bdev 00:05:39.734 LINK sgl 00:05:39.734 LINK scheduler 00:05:39.734 CXX test/cpp_headers/accel_module.o 00:05:39.734 LINK overhead 00:05:39.734 LINK idxd_perf 00:05:39.734 LINK nvmf 00:05:39.734 LINK reconnect 00:05:39.734 LINK abort 00:05:39.734 LINK arbitration 00:05:39.734 LINK spdk_dd 00:05:40.017 LINK test_dma 00:05:40.017 LINK nvme_compliance 00:05:40.017 LINK nvme_manage 00:05:40.017 LINK accel_perf 00:05:40.017 LINK dif 00:05:40.017 LINK bdevio 00:05:40.017 LINK blobcli 00:05:40.017 1 warning generated. 00:05:40.017 LINK nvme_fuzz 00:05:40.017 CXX test/cpp_headers/assert.o 00:05:40.017 LINK spdk_nvme 00:05:40.017 CXX test/cpp_headers/barrier.o 00:05:40.017 LINK spdk_bdev 00:05:40.017 CXX test/cpp_headers/base64.o 00:05:40.017 LINK mem_callbacks 00:05:40.017 LINK spdk_nvme_identify 00:05:40.343 LINK spdk_nvme_perf 00:05:40.343 CC test/thread/lock/spdk_lock.o 00:05:40.343 LINK bdevperf 00:05:40.343 CXX test/cpp_headers/bdev.o 00:05:40.343 LINK spdk_top 00:05:40.343 CC test/env/vtophys/vtophys.o 00:05:40.343 CXX test/cpp_headers/bdev_module.o 00:05:40.343 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:40.343 CXX test/cpp_headers/bdev_zone.o 00:05:40.343 CXX test/cpp_headers/bit_array.o 00:05:40.343 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:40.343 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:40.343 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:05:40.343 CXX test/cpp_headers/bit_pool.o 00:05:40.343 CC test/env/memory/memory_ut.o 00:05:40.624 CC test/env/pci/pci_ut.o 00:05:40.624 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:05:40.624 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:40.624 LINK vtophys 00:05:40.624 CXX test/cpp_headers/blob_bdev.o 00:05:40.624 CXX test/cpp_headers/blobfs_bdev.o 00:05:40.624 CXX test/cpp_headers/blobfs.o 00:05:40.624 LINK env_dpdk_post_init 00:05:40.624 CXX test/cpp_headers/blob.o 00:05:40.624 CXX test/cpp_headers/conf.o 00:05:40.624 LINK cuse 00:05:40.624 CXX test/cpp_headers/config.o 00:05:40.624 CXX test/cpp_headers/cpuset.o 00:05:40.624 CXX test/cpp_headers/crc16.o 00:05:40.624 CXX test/cpp_headers/crc32.o 00:05:40.624 CXX test/cpp_headers/crc64.o 00:05:40.886 CXX test/cpp_headers/dif.o 00:05:40.886 CXX test/cpp_headers/dma.o 00:05:40.886 CXX test/cpp_headers/endian.o 00:05:40.886 CXX test/cpp_headers/env_dpdk.o 00:05:40.886 CXX test/cpp_headers/env.o 00:05:40.886 CXX test/cpp_headers/event.o 00:05:40.886 CXX test/cpp_headers/fd_group.o 00:05:40.886 
LINK llvm_vfio_fuzz 00:05:40.886 CXX test/cpp_headers/fd.o 00:05:40.886 CXX test/cpp_headers/file.o 00:05:40.886 CXX test/cpp_headers/ftl.o 00:05:40.886 CXX test/cpp_headers/gpt_spec.o 00:05:40.886 CXX test/cpp_headers/hexlify.o 00:05:40.886 CXX test/cpp_headers/histogram_data.o 00:05:40.886 CXX test/cpp_headers/idxd.o 00:05:40.886 CXX test/cpp_headers/idxd_spec.o 00:05:40.886 CXX test/cpp_headers/init.o 00:05:40.886 CXX test/cpp_headers/ioat.o 00:05:40.886 CXX test/cpp_headers/ioat_spec.o 00:05:40.886 CXX test/cpp_headers/iscsi_spec.o 00:05:40.886 CXX test/cpp_headers/json.o 00:05:40.886 CXX test/cpp_headers/jsonrpc.o 00:05:40.886 CXX test/cpp_headers/likely.o 00:05:40.886 CXX test/cpp_headers/log.o 00:05:40.886 CXX test/cpp_headers/lvol.o 00:05:40.886 CXX test/cpp_headers/memory.o 00:05:40.886 CXX test/cpp_headers/mmio.o 00:05:40.886 CXX test/cpp_headers/nbd.o 00:05:40.886 CXX test/cpp_headers/notify.o 00:05:40.886 CXX test/cpp_headers/nvme.o 00:05:41.146 CXX test/cpp_headers/nvme_intel.o 00:05:41.146 CXX test/cpp_headers/nvme_ocssd.o 00:05:41.146 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:41.146 CXX test/cpp_headers/nvme_spec.o 00:05:41.146 LINK pci_ut 00:05:41.146 CXX test/cpp_headers/nvme_zns.o 00:05:41.146 CXX test/cpp_headers/nvmf_cmd.o 00:05:41.146 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:41.146 CXX test/cpp_headers/nvmf.o 00:05:41.146 LINK vhost_fuzz 00:05:41.146 CXX test/cpp_headers/nvmf_spec.o 00:05:41.146 CXX test/cpp_headers/nvmf_transport.o 00:05:41.146 CXX test/cpp_headers/opal.o 00:05:41.146 CXX test/cpp_headers/opal_spec.o 00:05:41.146 CXX test/cpp_headers/pci_ids.o 00:05:41.146 CXX test/cpp_headers/pipe.o 00:05:41.146 CXX test/cpp_headers/queue.o 00:05:41.146 CXX test/cpp_headers/reduce.o 00:05:41.146 CXX test/cpp_headers/rpc.o 00:05:41.146 CXX test/cpp_headers/scheduler.o 00:05:41.146 CXX test/cpp_headers/scsi.o 00:05:41.146 CXX test/cpp_headers/scsi_spec.o 00:05:41.146 CXX test/cpp_headers/sock.o 00:05:41.146 CXX test/cpp_headers/stdinc.o 00:05:41.146 CXX test/cpp_headers/string.o 00:05:41.146 CXX test/cpp_headers/thread.o 00:05:41.146 CXX test/cpp_headers/trace.o 00:05:41.146 CXX test/cpp_headers/trace_parser.o 00:05:41.146 CXX test/cpp_headers/tree.o 00:05:41.146 CXX test/cpp_headers/ublk.o 00:05:41.146 CXX test/cpp_headers/util.o 00:05:41.146 CXX test/cpp_headers/uuid.o 00:05:41.146 CXX test/cpp_headers/version.o 00:05:41.146 CXX test/cpp_headers/vfio_user_pci.o 00:05:41.146 CXX test/cpp_headers/vfio_user_spec.o 00:05:41.146 CXX test/cpp_headers/vhost.o 00:05:41.146 CXX test/cpp_headers/vmd.o 00:05:41.147 CXX test/cpp_headers/xor.o 00:05:41.147 CXX test/cpp_headers/zipf.o 00:05:41.405 LINK llvm_nvme_fuzz 00:05:41.405 LINK memory_ut 00:05:41.663 LINK spdk_lock 00:05:41.922 LINK iscsi_fuzz 00:05:43.299 LINK esnap 00:05:43.867 00:05:43.867 real 0m41.216s 00:05:43.867 user 6m32.657s 00:05:43.867 sys 2m29.538s 00:05:43.867 10:00:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:05:43.867 10:00:56 -- common/autotest_common.sh@10 -- $ set +x 00:05:43.867 ************************************ 00:05:43.867 END TEST make 00:05:43.867 ************************************ 00:05:43.867 10:00:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.867 10:00:57 -- nvmf/common.sh@7 -- # uname -s 00:05:43.867 10:00:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.867 10:00:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.867 10:00:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.867 
10:00:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.867 10:00:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.867 10:00:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.867 10:00:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.867 10:00:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.867 10:00:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.867 10:00:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.867 10:00:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:05:43.867 10:00:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:05:43.867 10:00:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.867 10:00:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.867 10:00:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.867 10:00:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:43.867 10:00:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.867 10:00:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.867 10:00:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.867 10:00:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.867 10:00:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.867 10:00:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.867 10:00:57 -- paths/export.sh@5 -- # export PATH 00:05:43.867 10:00:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.867 10:00:57 -- nvmf/common.sh@46 -- # : 0 00:05:43.867 10:00:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:43.867 10:00:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:43.867 10:00:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:43.867 10:00:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.867 10:00:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.867 10:00:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:43.867 10:00:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:43.867 10:00:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:43.867 10:00:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:43.867 10:00:57 -- spdk/autotest.sh@32 -- # uname -s 00:05:43.867 10:00:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:43.867 10:00:57 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:43.867 10:00:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:05:43.867 10:00:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:44.126 10:00:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:05:44.126 10:00:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:44.126 10:00:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:44.126 10:00:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:44.126 10:00:57 -- spdk/autotest.sh@48 -- # udevadm_pid=1100070 00:05:44.126 10:00:57 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:05:44.126 10:00:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:44.126 10:00:57 -- spdk/autotest.sh@54 -- # echo 1100072 00:05:44.126 10:00:57 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:05:44.126 10:00:57 -- spdk/autotest.sh@56 -- # echo 1100073 00:05:44.126 10:00:57 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:05:44.126 10:00:57 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:05:44.126 10:00:57 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:05:44.126 10:00:57 -- spdk/autotest.sh@60 -- # echo 1100074 00:05:44.126 10:00:57 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:05:44.126 10:00:57 -- spdk/autotest.sh@62 -- # echo 1100075 00:05:44.126 10:00:57 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:44.126 10:00:57 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:05:44.126 10:00:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:44.126 10:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.126 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:05:44.126 10:00:57 -- spdk/autotest.sh@70 -- # create_test_list 00:05:44.126 10:00:57 -- common/autotest_common.sh@736 -- # xtrace_disable 00:05:44.126 10:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.126 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:05:44.126 10:00:57 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:05:44.126 10:00:57 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:05:44.126 10:00:57 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:05:44.126 10:00:57 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:05:44.126 10:00:57 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:05:44.126 10:00:57 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 
00:05:44.126 10:00:57 -- common/autotest_common.sh@1440 -- # uname 00:05:44.126 10:00:57 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:05:44.126 10:00:57 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:05:44.126 10:00:57 -- common/autotest_common.sh@1460 -- # uname 00:05:44.126 10:00:57 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:05:44.126 10:00:57 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:05:44.126 10:00:57 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=clang 00:05:44.126 10:00:57 -- spdk/autotest.sh@83 -- # hash lcov 00:05:44.126 10:00:57 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:05:44.127 10:00:57 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:44.127 10:00:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:44.127 10:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.127 10:00:57 -- spdk/autotest.sh@102 -- # rm -f 00:05:44.127 10:00:57 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:48.316 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:05:48.316 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:48.316 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:50.218 10:01:03 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:50.218 10:01:03 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:50.218 10:01:03 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:50.218 10:01:03 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:50.218 10:01:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:50.218 10:01:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:50.218 10:01:03 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:50.218 10:01:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:50.218 10:01:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:50.218 10:01:03 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:50.218 10:01:03 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:05:50.218 10:01:03 -- spdk/autotest.sh@121 -- # grep -v p 00:05:50.218 10:01:03 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:50.218 10:01:03 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:50.218 10:01:03 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:50.218 10:01:03 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:50.218 
10:01:03 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:50.218 No valid GPT data, bailing 00:05:50.218 10:01:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:50.218 10:01:03 -- scripts/common.sh@393 -- # pt= 00:05:50.218 10:01:03 -- scripts/common.sh@394 -- # return 1 00:05:50.218 10:01:03 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:50.218 1+0 records in 00:05:50.218 1+0 records out 00:05:50.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00535073 s, 196 MB/s 00:05:50.218 10:01:03 -- spdk/autotest.sh@129 -- # sync 00:05:50.218 10:01:03 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:50.218 10:01:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:50.218 10:01:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:55.493 10:01:07 -- spdk/autotest.sh@135 -- # uname -s 00:05:55.493 10:01:07 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:55.493 10:01:07 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:05:55.493 10:01:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.493 10:01:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.493 10:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:55.493 ************************************ 00:05:55.493 START TEST setup.sh 00:05:55.493 ************************************ 00:05:55.493 10:01:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:05:55.493 * Looking for test storage... 00:05:55.493 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:55.493 10:01:07 -- setup/test-setup.sh@10 -- # uname -s 00:05:55.493 10:01:07 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:55.493 10:01:07 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:05:55.493 10:01:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.493 10:01:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.493 10:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:55.493 ************************************ 00:05:55.493 START TEST acl 00:05:55.493 ************************************ 00:05:55.493 10:01:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:05:55.493 * Looking for test storage... 
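The pre_cleanup step just traced probes each /dev/nvme*n* namespace for an existing partition table (spdk-gpt.py first, then blkid) and, finding none, zeroes the first MiB so stale metadata cannot leak into the tests. A sketch of that logic, assuming the blkid check alone decides and leaving out the zoned-device filtering the real script also performs:

    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then                          # no partition table -> safe to scrub
            dd if=/dev/zero of="$dev" bs=1M count=1    # clear the first 1 MiB, as in the trace
        fi
    done
    sync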
00:05:55.493 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:55.493 10:01:08 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:55.493 10:01:08 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:55.493 10:01:08 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:55.493 10:01:08 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:55.493 10:01:08 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:55.493 10:01:08 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:55.493 10:01:08 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:55.493 10:01:08 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:55.493 10:01:08 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:55.493 10:01:08 -- setup/acl.sh@12 -- # devs=() 00:05:55.493 10:01:08 -- setup/acl.sh@12 -- # declare -a devs 00:05:55.493 10:01:08 -- setup/acl.sh@13 -- # drivers=() 00:05:55.493 10:01:08 -- setup/acl.sh@13 -- # declare -A drivers 00:05:55.493 10:01:08 -- setup/acl.sh@51 -- # setup reset 00:05:55.493 10:01:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:55.493 10:01:08 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:00.777 10:01:13 -- setup/acl.sh@52 -- # collect_setup_devs 00:06:00.777 10:01:13 -- setup/acl.sh@16 -- # local dev driver 00:06:00.777 10:01:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:00.777 10:01:13 -- setup/acl.sh@15 -- # setup output status 00:06:00.777 10:01:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:00.777 10:01:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:06:04.067 Hugepages 00:06:04.067 node hugesize free / total 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 00:06:04.068 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:04.068 10:01:16 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:06:04.068 10:01:16 -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:04.068 10:01:16 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:04.068 10:01:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:16 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:17 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:17 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:17 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:17 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:17 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:17 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:06:04.068 10:01:17 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:17 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:04.068 10:01:17 -- setup/acl.sh@20 -- # continue 00:06:04.068 10:01:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:04.068 10:01:17 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:04.068 10:01:17 -- setup/acl.sh@54 -- # run_test denied denied 00:06:04.068 10:01:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.068 10:01:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.068 10:01:17 -- common/autotest_common.sh@10 -- # set +x 00:06:04.068 ************************************ 00:06:04.068 START TEST denied 00:06:04.068 ************************************ 00:06:04.068 10:01:17 -- common/autotest_common.sh@1104 -- # denied 00:06:04.068 10:01:17 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:06:04.068 10:01:17 -- setup/acl.sh@38 -- # setup output config 00:06:04.068 10:01:17 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:06:04.068 10:01:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:04.068 10:01:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:09.336 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:06:09.336 10:01:22 -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:06:09.336 10:01:22 -- setup/acl.sh@28 -- # local dev driver 00:06:09.336 10:01:22 -- setup/acl.sh@30 -- # for dev in "$@" 00:06:09.336 10:01:22 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:06:09.336 10:01:22 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:06:09.336 10:01:22 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:09.336 10:01:22 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:09.336 10:01:22 -- setup/acl.sh@41 -- # setup reset 00:06:09.336 10:01:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:09.336 10:01:22 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:15.902 00:06:15.902 real 0m11.470s 00:06:15.902 user 0m3.197s 00:06:15.902 sys 0m7.275s 00:06:15.902 10:01:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.902 10:01:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.902 ************************************ 00:06:15.902 END TEST denied 00:06:15.902 ************************************ 00:06:15.902 10:01:28 -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:15.902 10:01:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.902 10:01:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.902 10:01:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.902 ************************************ 00:06:15.902 START TEST allowed 00:06:15.902 ************************************ 00:06:15.902 10:01:28 -- common/autotest_common.sh@1104 -- # allowed 00:06:15.902 10:01:28 -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:06:15.902 10:01:28 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:06:15.902 10:01:28 -- setup/acl.sh@45 -- # setup output config 00:06:15.902 10:01:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:15.902 10:01:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 
config 00:06:24.021 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:24.021 10:01:37 -- setup/acl.sh@47 -- # verify 00:06:24.021 10:01:37 -- setup/acl.sh@28 -- # local dev driver 00:06:24.021 10:01:37 -- setup/acl.sh@48 -- # setup reset 00:06:24.021 10:01:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:24.021 10:01:37 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:30.666 00:06:30.666 real 0m14.300s 00:06:30.666 user 0m3.763s 00:06:30.666 sys 0m7.286s 00:06:30.666 10:01:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.666 10:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:30.666 ************************************ 00:06:30.666 END TEST allowed 00:06:30.666 ************************************ 00:06:30.666 00:06:30.666 real 0m34.932s 00:06:30.666 user 0m10.255s 00:06:30.666 sys 0m20.687s 00:06:30.666 10:01:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.666 10:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:30.666 ************************************ 00:06:30.666 END TEST acl 00:06:30.666 ************************************ 00:06:30.666 10:01:42 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:06:30.666 10:01:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.666 10:01:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.666 10:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:30.666 ************************************ 00:06:30.666 START TEST hugepages 00:06:30.666 ************************************ 00:06:30.666 10:01:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:06:30.666 * Looking for test storage... 
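The denied/allowed tests above drive the same scripts/setup.sh with the PCI_BLOCKED and PCI_ALLOWED variables seen in the trace: the first run must skip the 0000:1a:00.0 controller, the second must bind it to vfio-pci. Reduced to its essentials (run from the SPDK repo root):

    PCI_BLOCKED='0000:1a:00.0' ./scripts/setup.sh config   # expect "Skipping denied controller at 0000:1a:00.0"
    PCI_ALLOWED='0000:1a:00.0' ./scripts/setup.sh config   # expect "0000:1a:00.0 ... nvme -> vfio-pci"
    ./scripts/setup.sh reset                               # hand the devices back to their kernel drivers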
00:06:30.666 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:06:30.666 10:01:43 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:30.666 10:01:43 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:30.666 10:01:43 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:30.666 10:01:43 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:30.666 10:01:43 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:30.666 10:01:43 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:30.666 10:01:43 -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:30.666 10:01:43 -- setup/common.sh@18 -- # local node= 00:06:30.666 10:01:43 -- setup/common.sh@19 -- # local var val 00:06:30.666 10:01:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:30.666 10:01:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:30.666 10:01:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:30.666 10:01:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:30.666 10:01:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:30.666 10:01:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.666 10:01:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 72485956 kB' 'MemAvailable: 77504948 kB' 'Buffers: 20532 kB' 'Cached: 12137784 kB' 'SwapCached: 0 kB' 'Active: 7939700 kB' 'Inactive: 4745084 kB' 'Active(anon): 7309464 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529744 kB' 'Mapped: 179152 kB' 'Shmem: 6782996 kB' 'KReclaimable: 479856 kB' 'Slab: 881760 kB' 'SReclaimable: 479856 kB' 'SUnreclaim: 401904 kB' 'KernelStack: 16064 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438216 kB' 'Committed_AS: 8706408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209484 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:30.666 10:01:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.666 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.666 10:01:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.666 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.666 10:01:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.666 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.666 10:01:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:06:30.666 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.666 10:01:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.666 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.666 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # continue 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:30.667 10:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:30.667 10:01:43 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:30.667 10:01:43 -- setup/common.sh@33 -- # echo 2048 00:06:30.667 10:01:43 -- setup/common.sh@33 -- # return 0 00:06:30.667 10:01:43 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:30.667 10:01:43 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:30.667 10:01:43 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:30.667 10:01:43 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:30.667 10:01:43 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:30.667 10:01:43 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
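The long run of '[[ ... == Hugepagesize ]] / continue' lines above is get_meminfo walking /proc/meminfo key by key until it reaches Hugepagesize and echoes 2048. A compact equivalent of that lookup, covering the system-wide case only (the real helper also accepts a NUMA node argument and then reads the per-node meminfo file):

    get_meminfo() {   # usage: get_meminfo <Field>
        awk -v k="$1" -F': *' '$1 == k {print $2 + 0; exit}' /proc/meminfo
    }
    get_meminfo Hugepagesize   # -> 2048 (kB), the value the scan above returns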
00:06:30.667 10:01:43 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:30.667 10:01:43 -- setup/hugepages.sh@207 -- # get_nodes 00:06:30.667 10:01:43 -- setup/hugepages.sh@27 -- # local node 00:06:30.667 10:01:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:30.667 10:01:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:30.667 10:01:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:30.667 10:01:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:30.667 10:01:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:30.667 10:01:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:30.667 10:01:43 -- setup/hugepages.sh@208 -- # clear_hp 00:06:30.667 10:01:43 -- setup/hugepages.sh@37 -- # local node hp 00:06:30.667 10:01:43 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:30.667 10:01:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:30.667 10:01:43 -- setup/hugepages.sh@41 -- # echo 0 00:06:30.667 10:01:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:30.667 10:01:43 -- setup/hugepages.sh@41 -- # echo 0 00:06:30.667 10:01:43 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:30.667 10:01:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:30.667 10:01:43 -- setup/hugepages.sh@41 -- # echo 0 00:06:30.667 10:01:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:30.667 10:01:43 -- setup/hugepages.sh@41 -- # echo 0 00:06:30.667 10:01:43 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:30.667 10:01:43 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:30.667 10:01:43 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:30.667 10:01:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.667 10:01:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.667 10:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:30.667 ************************************ 00:06:30.667 START TEST default_setup 00:06:30.667 ************************************ 00:06:30.667 10:01:43 -- common/autotest_common.sh@1104 -- # default_setup 00:06:30.667 10:01:43 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:30.667 10:01:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:30.667 10:01:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:30.667 10:01:43 -- setup/hugepages.sh@51 -- # shift 00:06:30.667 10:01:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:30.667 10:01:43 -- setup/hugepages.sh@52 -- # local node_ids 00:06:30.667 10:01:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:30.667 10:01:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:30.667 10:01:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:30.667 10:01:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:30.667 10:01:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:30.667 10:01:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:30.667 10:01:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:30.667 10:01:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:30.667 10:01:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:30.667 10:01:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:30.667 10:01:43 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:30.667 10:01:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:30.667 10:01:43 -- setup/hugepages.sh@73 -- # return 0 00:06:30.667 10:01:43 -- setup/hugepages.sh@137 -- # setup output 00:06:30.667 10:01:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:30.667 10:01:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:33.199 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:33.458 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:36.749 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:38.729 10:01:51 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:38.729 10:01:51 -- setup/hugepages.sh@89 -- # local node 00:06:38.729 10:01:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:38.729 10:01:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:38.729 10:01:51 -- setup/hugepages.sh@92 -- # local surp 00:06:38.729 10:01:51 -- setup/hugepages.sh@93 -- # local resv 00:06:38.729 10:01:51 -- setup/hugepages.sh@94 -- # local anon 00:06:38.729 10:01:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:38.729 10:01:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:38.729 10:01:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:38.729 10:01:51 -- setup/common.sh@18 -- # local node= 00:06:38.729 10:01:51 -- setup/common.sh@19 -- # local var val 00:06:38.729 10:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:06:38.729 10:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:38.729 10:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:38.729 10:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:38.729 10:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:06:38.729 10:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74681232 kB' 'MemAvailable: 79700224 kB' 'Buffers: 20532 kB' 'Cached: 12146144 kB' 'SwapCached: 0 kB' 'Active: 7963616 kB' 'Inactive: 4745084 kB' 'Active(anon): 7333380 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544440 kB' 'Mapped: 179172 kB' 'Shmem: 6791356 kB' 'KReclaimable: 479856 kB' 'Slab: 880656 kB' 'SReclaimable: 479856 kB' 'SUnreclaim: 400800 kB' 'KernelStack: 16128 kB' 'PageTables: 8536 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8730948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209436 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- 
# continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.729 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.729 10:01:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 
10:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:38.730 10:01:51 -- setup/common.sh@33 -- # echo 0 00:06:38.730 10:01:51 -- setup/common.sh@33 -- # return 0 00:06:38.730 10:01:51 -- setup/hugepages.sh@97 -- # anon=0 00:06:38.730 10:01:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:38.730 10:01:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:38.730 10:01:51 -- setup/common.sh@18 -- # local node= 00:06:38.730 10:01:51 -- setup/common.sh@19 -- # local var val 00:06:38.730 10:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:06:38.730 10:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:38.730 10:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:38.730 10:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:38.730 10:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:06:38.730 10:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74682040 kB' 'MemAvailable: 79701032 kB' 'Buffers: 20532 kB' 'Cached: 12146144 kB' 'SwapCached: 0 kB' 'Active: 7963012 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332776 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544816 kB' 'Mapped: 179088 kB' 'Shmem: 6791356 kB' 'KReclaimable: 479856 kB' 'Slab: 880660 kB' 'SReclaimable: 479856 kB' 'SUnreclaim: 400804 kB' 'KernelStack: 16128 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8738972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209452 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 
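The meminfo dump above now reports HugePages_Total and HugePages_Free of 1024, which is what default_setup requested earlier via get_test_nr_hugepages 2097152 0. The sizing arithmetic behind that request is simply the requested memory divided by the huge page size, with both values taken from the trace:

    size=2097152                      # kB (2 GiB) requested for the default_setup test
    hugepagesize=2048                 # kB per huge page, from the Hugepagesize lookup
    echo $(( size / hugepagesize ))   # -> 1024 pages, matching HugePages_Total: 1024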
00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 
10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.730 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.730 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 
00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # 
continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.731 10:01:51 -- setup/common.sh@33 -- # echo 0 00:06:38.731 10:01:51 -- setup/common.sh@33 -- # return 0 00:06:38.731 10:01:51 -- setup/hugepages.sh@99 -- # surp=0 00:06:38.731 10:01:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:38.731 10:01:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:38.731 10:01:51 -- setup/common.sh@18 -- # local node= 00:06:38.731 10:01:51 -- setup/common.sh@19 -- # local var val 00:06:38.731 10:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:06:38.731 10:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:38.731 10:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:38.731 10:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:38.731 10:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:06:38.731 10:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74683112 kB' 'MemAvailable: 79702104 kB' 'Buffers: 20532 kB' 'Cached: 12146148 kB' 'SwapCached: 0 kB' 'Active: 7962508 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332272 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544312 kB' 'Mapped: 179148 kB' 'Shmem: 6791360 kB' 'KReclaimable: 479856 kB' 'Slab: 880652 kB' 'SReclaimable: 479856 kB' 'SUnreclaim: 400796 kB' 'KernelStack: 16080 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8730612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209420 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- 
setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 
10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.731 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.731 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:38.732 10:01:51 -- setup/common.sh@33 -- # echo 0 00:06:38.732 10:01:51 -- setup/common.sh@33 -- # return 0 00:06:38.732 10:01:51 -- setup/hugepages.sh@100 -- # resv=0 00:06:38.732 10:01:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:38.732 nr_hugepages=1024 00:06:38.732 10:01:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:38.732 resv_hugepages=0 00:06:38.732 10:01:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:38.732 surplus_hugepages=0 00:06:38.732 10:01:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:38.732 anon_hugepages=0 00:06:38.732 10:01:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:38.732 10:01:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:38.732 10:01:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:38.732 10:01:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:38.732 10:01:51 -- setup/common.sh@18 -- # local node= 00:06:38.732 
10:01:51 -- setup/common.sh@19 -- # local var val 00:06:38.732 10:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:06:38.732 10:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:38.732 10:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:38.732 10:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:38.732 10:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:06:38.732 10:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74682860 kB' 'MemAvailable: 79701852 kB' 'Buffers: 20532 kB' 'Cached: 12146180 kB' 'SwapCached: 0 kB' 'Active: 7962568 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332332 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544312 kB' 'Mapped: 179088 kB' 'Shmem: 6791392 kB' 'KReclaimable: 479856 kB' 'Slab: 880652 kB' 'SReclaimable: 479856 kB' 'SUnreclaim: 400796 kB' 'KernelStack: 16064 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8730628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209420 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 
10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.732 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.732 10:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:38.733 10:01:51 -- setup/common.sh@33 -- # echo 1024 00:06:38.733 10:01:51 -- setup/common.sh@33 -- # return 0 00:06:38.733 10:01:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:38.733 10:01:51 -- setup/hugepages.sh@112 -- # get_nodes 00:06:38.733 10:01:51 -- setup/hugepages.sh@27 -- # local node 00:06:38.733 10:01:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:38.733 10:01:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:38.733 10:01:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:38.733 10:01:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:38.733 10:01:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:38.733 10:01:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:38.733 10:01:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:38.733 10:01:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:38.733 10:01:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:38.733 10:01:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:38.733 10:01:51 -- setup/common.sh@18 -- # local node=0 00:06:38.733 10:01:51 -- setup/common.sh@19 -- # local var val 00:06:38.733 10:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:06:38.733 10:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:38.733 10:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:38.733 10:01:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:38.733 10:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:06:38.733 10:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116964 kB' 'MemFree: 40417700 kB' 'MemUsed: 7699264 kB' 'SwapCached: 0 kB' 'Active: 2955724 kB' 'Inactive: 607744 kB' 'Active(anon): 2576088 
kB' 'Inactive(anon): 0 kB' 'Active(file): 379636 kB' 'Inactive(file): 607744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3190480 kB' 'Mapped: 131404 kB' 'AnonPages: 376168 kB' 'Shmem: 2203100 kB' 'KernelStack: 10024 kB' 'PageTables: 5656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 316892 kB' 'Slab: 547428 kB' 'SReclaimable: 316892 kB' 'SUnreclaim: 230536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # 
read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- 
setup/common.sh@32 -- # continue 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:06:38.733 10:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:06:38.733 10:01:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:38.733 10:01:51 -- setup/common.sh@33 -- # echo 0 00:06:38.733 10:01:51 -- setup/common.sh@33 -- # return 0 00:06:38.733 10:01:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:38.733 10:01:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:38.733 10:01:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:38.733 10:01:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:38.733 10:01:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:38.733 node0=1024 expecting 1024 00:06:38.733 10:01:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:38.733 00:06:38.733 real 0m8.758s 00:06:38.733 user 0m2.014s 00:06:38.733 sys 0m3.705s 00:06:38.733 10:01:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.733 10:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:38.733 ************************************ 00:06:38.733 END TEST default_setup 00:06:38.733 ************************************ 00:06:38.733 10:01:51 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:38.733 10:01:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:38.733 10:01:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.733 10:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:38.733 ************************************ 00:06:38.733 START TEST per_node_1G_alloc 00:06:38.733 ************************************ 00:06:38.733 10:01:51 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:06:38.733 10:01:51 -- setup/hugepages.sh@143 -- # local IFS=, 00:06:38.733 10:01:51 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:06:38.733 10:01:51 -- setup/hugepages.sh@49 -- # local size=1048576 00:06:38.733 10:01:51 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:06:38.733 10:01:51 -- setup/hugepages.sh@51 -- # shift 00:06:38.733 10:01:51 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:06:38.733 10:01:51 -- setup/hugepages.sh@52 -- # local node_ids 00:06:38.733 10:01:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:38.733 10:01:51 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:38.733 10:01:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:06:38.733 10:01:51 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:06:38.733 10:01:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:38.733 10:01:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:38.733 10:01:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:38.733 10:01:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:38.733 10:01:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:38.733 10:01:51 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:06:38.734 10:01:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:38.734 10:01:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:38.734 10:01:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:38.734 10:01:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:38.734 10:01:51 -- setup/hugepages.sh@73 -- # return 0 00:06:38.734 10:01:51 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:38.734 10:01:51 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 
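What the preceding trace computes: per_node_1G_alloc requests 1048576 kB of hugepages per node on nodes 0 and 1; at the 2048 kB hugepage size reported later in meminfo, that is 512 pages per node, hence NRHUGE=512 and HUGENODE=0,1 and the HugePages_Total of 1024 seen in the snapshots below. A minimal sketch of the per-node mechanism such a request drives (the sysfs path is the stock kernel interface; this is an illustration, not the scripts/setup.sh source):

    # illustrative sketch only: per-node hugepage allocation via sysfs (needs root)
    NRHUGE=512
    for node in 0 1; do
        echo "$NRHUGE" > /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    done
    grep HugePages_Total /proc/meminfo    # expect 1024 with 512 pages on each of the two nodes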
00:06:38.734 10:01:51 -- setup/hugepages.sh@146 -- # setup output 00:06:38.734 10:01:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:38.734 10:01:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:42.919 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:42.919 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:42.919 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:44.298 10:01:57 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:06:44.298 10:01:57 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:44.298 10:01:57 -- setup/hugepages.sh@89 -- # local node 00:06:44.299 10:01:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:44.299 10:01:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:44.299 10:01:57 -- setup/hugepages.sh@92 -- # local surp 00:06:44.299 10:01:57 -- setup/hugepages.sh@93 -- # local resv 00:06:44.299 10:01:57 -- setup/hugepages.sh@94 -- # local anon 00:06:44.299 10:01:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:44.299 10:01:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:44.299 10:01:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:44.299 10:01:57 -- setup/common.sh@18 -- # local node= 00:06:44.299 10:01:57 -- setup/common.sh@19 -- # local var val 00:06:44.299 10:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:06:44.299 10:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.299 10:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.299 10:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.299 10:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.299 10:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74687124 kB' 'MemAvailable: 79706108 kB' 'Buffers: 20532 kB' 'Cached: 12146296 kB' 'SwapCached: 0 kB' 'Active: 7963348 kB' 'Inactive: 4745084 kB' 'Active(anon): 7333112 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544608 kB' 'Mapped: 178336 kB' 'Shmem: 6791508 kB' 'KReclaimable: 479848 kB' 
'Slab: 880660 kB' 'SReclaimable: 479848 kB' 'SUnreclaim: 400812 kB' 'KernelStack: 16240 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8719656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209404 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.299 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.299 10:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 
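The long runs of [[ ... ]] / continue here are get_meminfo's xtrace: the helper snapshots /proc/meminfo, then walks every field: value pair until it reaches the one requested (AnonHugePages in this pass), echoes its value, and returns 0. A simplified reconstruction of that idiom (plain /proc/meminfo only; the real setup/common.sh also handles per-node meminfo files, so treat this as a sketch rather than the script source):

    get_meminfo() {
        # simplified sketch of the parsing pattern visible in the trace
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        return 1
    }
    anon=$(get_meminfo AnonHugePages)    # 0 in this run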
00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.300 10:01:57 -- setup/common.sh@33 -- # echo 0 00:06:44.300 10:01:57 -- setup/common.sh@33 -- # return 0 00:06:44.300 10:01:57 -- setup/hugepages.sh@97 -- # anon=0 00:06:44.300 10:01:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:44.300 10:01:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:44.300 10:01:57 -- setup/common.sh@18 -- # local node= 00:06:44.300 10:01:57 -- setup/common.sh@19 -- # local var val 00:06:44.300 10:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:06:44.300 10:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.300 10:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.300 10:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.300 10:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.300 10:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74689272 kB' 'MemAvailable: 79708256 kB' 'Buffers: 20532 kB' 'Cached: 12146300 kB' 'SwapCached: 0 kB' 'Active: 7962268 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332032 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543964 kB' 'Mapped: 178248 kB' 'Shmem: 6791512 kB' 'KReclaimable: 479848 kB' 'Slab: 880636 kB' 'SReclaimable: 479848 kB' 'SUnreclaim: 400788 kB' 'KernelStack: 16080 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8719668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209388 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.300 10:01:57 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.300 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.300 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- 
setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.301 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.301 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.302 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.302 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.302 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.302 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 
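The same field-by-field walk is repeating here for HugePages_Surp, and repeats once more below for HugePages_Rsvd and HugePages_Total before verify_nr_hugepages checks the totals. Condensed, the accounting in this pass amounts to the following (a restatement of the hugepages.sh@99-@110 steps, reusing the get_meminfo sketch above, not the script source):

    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    nr_hugepages=1024                     # 512 pages on each of nodes 0 and 1
    (( 1024 == nr_hugepages + surp + resv ))    # the @107 check
    (( 1024 == nr_hugepages ))                  # the @109 check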
00:06:44.302 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.302 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.302 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.302 10:01:57 -- setup/common.sh@33 -- # echo 0 00:06:44.302 10:01:57 -- setup/common.sh@33 -- # return 0 00:06:44.302 10:01:57 -- setup/hugepages.sh@99 -- # surp=0 00:06:44.302 10:01:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:44.302 10:01:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:44.302 10:01:57 -- setup/common.sh@18 -- # local node= 00:06:44.302 10:01:57 -- setup/common.sh@19 -- # local var val 00:06:44.302 10:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:06:44.302 10:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.302 10:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.302 10:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.302 10:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.302 10:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.302 10:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74690132 kB' 'MemAvailable: 79709116 kB' 'Buffers: 20532 kB' 'Cached: 12146312 kB' 'SwapCached: 0 kB' 'Active: 7962288 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332052 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543964 kB' 'Mapped: 178248 kB' 'Shmem: 6791524 kB' 'KReclaimable: 479848 kB' 'Slab: 880636 kB' 'SReclaimable: 479848 kB' 'SUnreclaim: 400788 kB' 'KernelStack: 16080 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8719684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209388 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.302 10:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.302 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.302 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.302 10:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.563 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.563 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.563 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.563 10:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.563 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.563 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 
10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.564 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.564 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- 
# read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.565 10:01:57 -- setup/common.sh@33 -- # echo 0 00:06:44.565 10:01:57 -- setup/common.sh@33 -- # return 0 00:06:44.565 10:01:57 -- setup/hugepages.sh@100 -- # resv=0 00:06:44.565 10:01:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:44.565 nr_hugepages=1024 00:06:44.565 10:01:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:44.565 resv_hugepages=0 00:06:44.565 10:01:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:44.565 surplus_hugepages=0 00:06:44.565 10:01:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:44.565 anon_hugepages=0 00:06:44.565 10:01:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:44.565 10:01:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:44.565 10:01:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:44.565 10:01:57 -- 
setup/common.sh@17 -- # local get=HugePages_Total 00:06:44.565 10:01:57 -- setup/common.sh@18 -- # local node= 00:06:44.565 10:01:57 -- setup/common.sh@19 -- # local var val 00:06:44.565 10:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:06:44.565 10:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.565 10:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.565 10:01:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.565 10:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.565 10:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74691000 kB' 'MemAvailable: 79709984 kB' 'Buffers: 20532 kB' 'Cached: 12146324 kB' 'SwapCached: 0 kB' 'Active: 7961868 kB' 'Inactive: 4745084 kB' 'Active(anon): 7331632 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543512 kB' 'Mapped: 178248 kB' 'Shmem: 6791536 kB' 'KReclaimable: 479848 kB' 'Slab: 880636 kB' 'SReclaimable: 479848 kB' 'SUnreclaim: 400788 kB' 'KernelStack: 16064 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8719700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209404 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 
10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.565 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.565 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 
10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.566 10:01:57 -- setup/common.sh@33 -- # echo 1024 00:06:44.566 10:01:57 -- setup/common.sh@33 -- # return 0 00:06:44.566 10:01:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:44.566 10:01:57 -- setup/hugepages.sh@112 -- # get_nodes 00:06:44.566 10:01:57 -- setup/hugepages.sh@27 -- # local node 00:06:44.566 10:01:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:44.566 10:01:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:44.566 10:01:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:44.566 10:01:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:44.566 10:01:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:44.566 10:01:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:44.566 10:01:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:44.566 10:01:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:44.566 10:01:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:44.566 10:01:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:44.566 10:01:57 -- setup/common.sh@18 -- # local node=0 00:06:44.566 10:01:57 -- setup/common.sh@19 -- # local var val 00:06:44.566 10:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:06:44.566 10:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.566 10:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:44.566 10:01:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:44.566 10:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.566 10:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116964 kB' 'MemFree: 41461156 
kB' 'MemUsed: 6655808 kB' 'SwapCached: 0 kB' 'Active: 2954648 kB' 'Inactive: 607744 kB' 'Active(anon): 2575012 kB' 'Inactive(anon): 0 kB' 'Active(file): 379636 kB' 'Inactive(file): 607744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3190512 kB' 'Mapped: 130508 kB' 'AnonPages: 375120 kB' 'Shmem: 2203132 kB' 'KernelStack: 9976 kB' 'PageTables: 5368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 316892 kB' 'Slab: 547392 kB' 'SReclaimable: 316892 kB' 'SUnreclaim: 230500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.566 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.566 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.567 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.567 10:01:57 -- setup/common.sh@33 -- # echo 0 00:06:44.567 10:01:57 -- setup/common.sh@33 -- # return 0 00:06:44.567 10:01:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:44.567 10:01:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:44.567 10:01:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:44.567 10:01:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:44.567 10:01:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:44.567 10:01:57 -- setup/common.sh@18 -- # local node=1 00:06:44.567 10:01:57 -- setup/common.sh@19 -- # local var val 00:06:44.567 10:01:57 -- setup/common.sh@20 -- # local mem_f mem 00:06:44.567 10:01:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.567 10:01:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:44.567 10:01:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:44.567 10:01:57 -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.567 10:01:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.567 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176564 kB' 'MemFree: 33231576 kB' 'MemUsed: 10944988 kB' 'SwapCached: 0 kB' 'Active: 5007684 kB' 'Inactive: 4137340 kB' 'Active(anon): 4757084 kB' 'Inactive(anon): 0 kB' 'Active(file): 250600 kB' 'Inactive(file): 4137340 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976376 kB' 'Mapped: 47740 kB' 'AnonPages: 168844 kB' 'Shmem: 4588436 kB' 'KernelStack: 6104 kB' 'PageTables: 2896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162956 kB' 'Slab: 333244 kB' 'SReclaimable: 162956 kB' 'SUnreclaim: 170288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var 
val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.568 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.568 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # continue 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # IFS=': ' 00:06:44.569 10:01:57 -- setup/common.sh@31 -- # read -r var val _ 00:06:44.569 10:01:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.569 10:01:57 -- setup/common.sh@33 -- # echo 0 00:06:44.569 10:01:57 -- setup/common.sh@33 -- # return 0 00:06:44.569 10:01:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:44.569 10:01:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:44.569 10:01:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:44.569 10:01:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:44.569 10:01:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:44.569 node0=512 expecting 512 00:06:44.569 10:01:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:44.569 10:01:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:44.569 10:01:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:44.569 10:01:57 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:06:44.569 node1=512 expecting 512 00:06:44.569 10:01:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:44.569 00:06:44.569 real 0m5.760s 00:06:44.569 user 0m2.096s 00:06:44.569 sys 0m3.724s 00:06:44.569 10:01:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.569 10:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:44.569 ************************************ 00:06:44.569 END TEST per_node_1G_alloc 00:06:44.569 ************************************ 00:06:44.569 10:01:57 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:44.569 10:01:57 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:06:44.569 10:01:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.569 10:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:44.569 ************************************ 00:06:44.569 START TEST even_2G_alloc 00:06:44.569 ************************************ 00:06:44.569 10:01:57 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:06:44.569 10:01:57 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:44.569 10:01:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:44.569 10:01:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:44.569 10:01:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:44.569 10:01:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:44.569 10:01:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:44.569 10:01:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:44.569 10:01:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:44.569 10:01:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:44.569 10:01:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:44.569 10:01:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:44.569 10:01:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:44.569 10:01:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:44.569 10:01:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:44.569 10:01:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:44.569 10:01:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:44.569 10:01:57 -- setup/hugepages.sh@83 -- # : 512 00:06:44.569 10:01:57 -- setup/hugepages.sh@84 -- # : 1 00:06:44.569 10:01:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:44.569 10:01:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:44.569 10:01:57 -- setup/hugepages.sh@83 -- # : 0 00:06:44.569 10:01:57 -- setup/hugepages.sh@84 -- # : 0 00:06:44.569 10:01:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:44.569 10:01:57 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:44.569 10:01:57 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:44.569 10:01:57 -- setup/hugepages.sh@153 -- # setup output 00:06:44.569 10:01:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:44.569 10:01:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:47.851 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:47.851 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:47.851 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:47.851 
0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:49.756 10:02:02 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:49.756 10:02:02 -- setup/hugepages.sh@89 -- # local node 00:06:49.756 10:02:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:49.756 10:02:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:49.756 10:02:02 -- setup/hugepages.sh@92 -- # local surp 00:06:49.756 10:02:02 -- setup/hugepages.sh@93 -- # local resv 00:06:49.756 10:02:02 -- setup/hugepages.sh@94 -- # local anon 00:06:49.756 10:02:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:49.756 10:02:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:49.756 10:02:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:49.756 10:02:02 -- setup/common.sh@18 -- # local node= 00:06:49.756 10:02:02 -- setup/common.sh@19 -- # local var val 00:06:49.756 10:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:06:49.756 10:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:49.756 10:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:49.756 10:02:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:49.756 10:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:06:49.756 10:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.756 10:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74719572 kB' 'MemAvailable: 79738548 kB' 'Buffers: 20532 kB' 'Cached: 12146444 kB' 'SwapCached: 0 kB' 'Active: 7963740 kB' 'Inactive: 4745084 kB' 'Active(anon): 7333504 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544760 kB' 'Mapped: 178392 kB' 'Shmem: 6791656 kB' 'KReclaimable: 479840 kB' 'Slab: 881136 kB' 'SReclaimable: 479840 kB' 'SUnreclaim: 401296 kB' 'KernelStack: 16112 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8720200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209500 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 
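The xtrace above is setup/common.sh's get_meminfo helper scanning every field of the chosen meminfo file until it reaches the one requested (here AnonHugePages, read from /proc/meminfo because no node id was passed; the per-node lookups earlier in this log read /sys/devices/system/node/nodeN/meminfo instead). As a rough illustrative sketch only, not the project's implementation, the same lookup could be written as:

  # Illustrative sketch, not setup/common.sh: print one meminfo field,
  # either system-wide or for a single NUMA node when an id is given.
  # Per-node files prefix each line with "Node <id>", so strip that first.
  get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local file=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      sed 's/^Node [0-9]* *//' "$file" | awk -v k="$key:" '$1 == k { print $2 }'
  }
  # e.g. get_meminfo_sketch HugePages_Total    -> 1024 on this machine
  #      get_meminfo_sketch HugePages_Surp 0   -> 0 on this machine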
00:06:49.756 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.756 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.756 10:02:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # 
continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
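Before this verification pass, the even_2G_alloc prologue above (get_test_nr_hugepages 2097152 with HUGE_EVEN_ALLOC=yes) set up the pool it is now checking: the requested 2097152 kB corresponds to 1024 pages at the machine's 2048 kB hugepage size, split evenly over both NUMA nodes. A hedged restatement of that arithmetic, with illustrative variable names rather than the script's own:

  size_kb=2097152                                # requested pool in kB (2 GiB)
  hugepagesize_kb=2048                           # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepagesize_kb ))  # 1024 pages total
  no_nodes=2                                     # nodes found under /sys/devices/system/node
  per_node=$(( nr_hugepages / no_nodes ))        # 512 pages on node0 and on node1
  echo "node0=$per_node expecting $per_node"
  echo "node1=$per_node expecting $per_node"

The two echoed lines mirror the "node0=512 expecting 512" / "node1=512 expecting 512" checks printed by the per_node_1G_alloc run earlier in this log.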
00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:49.757 10:02:02 -- setup/common.sh@33 -- # echo 0 00:06:49.757 10:02:02 -- setup/common.sh@33 -- # return 0 00:06:49.757 10:02:02 -- setup/hugepages.sh@97 -- # anon=0 00:06:49.757 
10:02:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:49.757 10:02:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:49.757 10:02:02 -- setup/common.sh@18 -- # local node= 00:06:49.757 10:02:02 -- setup/common.sh@19 -- # local var val 00:06:49.757 10:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:06:49.757 10:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:49.757 10:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:49.757 10:02:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:49.757 10:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:06:49.757 10:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.757 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.757 10:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74720224 kB' 'MemAvailable: 79739200 kB' 'Buffers: 20532 kB' 'Cached: 12146444 kB' 'SwapCached: 0 kB' 'Active: 7964424 kB' 'Inactive: 4745084 kB' 'Active(anon): 7334188 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545460 kB' 'Mapped: 178392 kB' 'Shmem: 6791656 kB' 'KReclaimable: 479840 kB' 'Slab: 881128 kB' 'SReclaimable: 479840 kB' 'SUnreclaim: 401288 kB' 'KernelStack: 16112 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8720212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209484 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:49.757 10:02:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 
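Taken together, the checks in this pass reduce to two assertions: the global pool size must match what the test configured (HugePages_Total against nr_hugepages plus surplus and reserved pages, as in the "(( 1024 == nr_hugepages + surp + resv ))" step earlier in the log), and the pool must be spread across the nodes as expected. A condensed, hypothetical restatement, reusing the illustrative get_meminfo_sketch helper from the note above rather than the real verify_nr_hugepages logic in setup/hugepages.sh:

  nr_hugepages=1024                              # pool size this test configured
  total=$(get_meminfo_sketch HugePages_Total)    # 1024 in this run
  surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
  (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage pool size"
  for node in 0 1; do
      echo "node$node=$(get_meminfo_sketch HugePages_Total "$node") expecting 512"
  done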
00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.758 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.758 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 
-- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.759 10:02:02 -- setup/common.sh@33 -- # echo 0 00:06:49.759 10:02:02 -- setup/common.sh@33 -- # return 0 00:06:49.759 10:02:02 -- setup/hugepages.sh@99 -- # surp=0 00:06:49.759 10:02:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:49.759 10:02:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:49.759 10:02:02 -- setup/common.sh@18 -- # local node= 00:06:49.759 10:02:02 -- setup/common.sh@19 -- # local var val 00:06:49.759 10:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:06:49.759 10:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:49.759 10:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:49.759 10:02:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:49.759 10:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:06:49.759 10:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74720492 kB' 'MemAvailable: 79739468 kB' 'Buffers: 20532 
kB' 'Cached: 12146456 kB' 'SwapCached: 0 kB' 'Active: 7963132 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332896 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544588 kB' 'Mapped: 178308 kB' 'Shmem: 6791668 kB' 'KReclaimable: 479840 kB' 'Slab: 881104 kB' 'SReclaimable: 479840 kB' 'SUnreclaim: 401264 kB' 'KernelStack: 16064 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8720224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209500 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- 
# continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.759 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.759 10:02:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 
10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:49.760 10:02:02 -- setup/common.sh@33 -- # echo 0 00:06:49.760 10:02:02 -- setup/common.sh@33 -- # return 0 00:06:49.760 10:02:02 -- setup/hugepages.sh@100 -- # resv=0 00:06:49.760 10:02:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:49.760 nr_hugepages=1024 00:06:49.760 10:02:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:49.760 resv_hugepages=0 00:06:49.760 10:02:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:49.760 surplus_hugepages=0 00:06:49.760 10:02:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:49.760 anon_hugepages=0 00:06:49.760 10:02:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:49.760 10:02:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:49.760 10:02:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:49.760 10:02:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:49.760 10:02:02 -- setup/common.sh@18 -- # local node= 00:06:49.760 10:02:02 -- setup/common.sh@19 -- # local var val 00:06:49.760 10:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:06:49.760 10:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:49.760 10:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:49.760 10:02:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:49.760 10:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:06:49.760 10:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74720492 kB' 'MemAvailable: 79739468 kB' 'Buffers: 20532 kB' 'Cached: 12146472 kB' 'SwapCached: 0 kB' 'Active: 7963080 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332844 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544508 kB' 'Mapped: 178308 kB' 'Shmem: 6791684 kB' 'KReclaimable: 479840 kB' 'Slab: 881108 kB' 'SReclaimable: 479840 kB' 'SUnreclaim: 401268 kB' 'KernelStack: 16112 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 
'Committed_AS: 8720240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209500 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.760 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.760 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.761 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.761 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:49.762 10:02:02 -- setup/common.sh@33 -- # echo 1024 00:06:49.762 10:02:02 -- setup/common.sh@33 -- # return 0 00:06:49.762 10:02:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:49.762 10:02:02 -- setup/hugepages.sh@112 -- # get_nodes 00:06:49.762 10:02:02 -- setup/hugepages.sh@27 -- # local node 00:06:49.762 10:02:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:49.762 10:02:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:49.762 10:02:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:49.762 10:02:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:49.762 10:02:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:49.762 10:02:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:49.762 10:02:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:49.762 10:02:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:49.762 10:02:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:49.762 10:02:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:49.762 10:02:02 -- setup/common.sh@18 -- # local node=0 00:06:49.762 10:02:02 -- setup/common.sh@19 -- # local var val 00:06:49.762 10:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:06:49.762 10:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:49.762 10:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:49.762 10:02:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:49.762 10:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:06:49.762 10:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116964 kB' 'MemFree: 41481408 kB' 'MemUsed: 6635556 kB' 'SwapCached: 0 kB' 'Active: 2954484 kB' 'Inactive: 607744 kB' 'Active(anon): 2574848 kB' 'Inactive(anon): 0 kB' 'Active(file): 379636 kB' 'Inactive(file): 607744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3190588 kB' 'Mapped: 130508 kB' 'AnonPages: 374900 kB' 'Shmem: 2203208 kB' 'KernelStack: 9976 kB' 'PageTables: 5368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 316892 kB' 'Slab: 548064 kB' 'SReclaimable: 316892 kB' 'SUnreclaim: 231172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': 
' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 
-- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.762 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.762 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # continue 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:06:49.763 10:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:06:49.763 10:02:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:49.763 10:02:02 -- setup/common.sh@33 -- # echo 0 00:06:49.763 10:02:02 -- setup/common.sh@33 -- # return 0 00:06:49.763 10:02:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:49.763 10:02:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:49.763 10:02:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:49.763 10:02:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:49.763 10:02:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:49.763 10:02:02 -- setup/common.sh@18 -- # local node=1 00:06:49.763 10:02:02 -- setup/common.sh@19 -- # local var val 00:06:49.763 10:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:06:49.763 10:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:49.763 10:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:49.763 10:02:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:49.763 10:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:06:49.763 10:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:49.763 10:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176564 kB' 
'MemFree: 33238328 kB' 'MemUsed: 10938236 kB' 'SwapCached: 0 kB' 'Active: 5008384 kB' 'Inactive: 4137340 kB' 'Active(anon): 4757784 kB' 'Inactive(anon): 0 kB' 'Active(file): 250600 kB' 'Inactive(file): 4137340 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976432 kB' 'Mapped: 47800 kB' 'AnonPages: 169396 kB' 'Shmem: 4588492 kB' 'KernelStack: 6104 kB' 'PageTables: 2888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162948 kB' 'Slab: 333044 kB' 'SReclaimable: 162948 kB' 'SUnreclaim: 170096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 reads each field of the node meminfo dump above and skips it with 'continue' until HugePages_Surp matches]
00:06:49.764 10:02:02 -- setup/common.sh@33 -- # echo 0
00:06:49.764 10:02:02 -- setup/common.sh@33 -- # return 0
00:06:49.764 10:02:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:49.764 10:02:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:49.764 10:02:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:49.764 10:02:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:49.764 10:02:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:06:49.764 node0=512 expecting 512
00:06:49.764 10:02:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:49.764 10:02:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:49.764 10:02:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:49.764 10:02:02 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:06:49.764 node1=512 expecting 512
00:06:49.764 10:02:02 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:49.764 real 0m5.099s
00:06:49.764 user 0m1.504s
00:06:49.764 sys 0m3.508s
00:06:49.764 10:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:49.764 10:02:02 -- common/autotest_common.sh@10 -- # set +x
00:06:49.764 ************************************
00:06:49.764 END TEST even_2G_alloc
00:06:49.764 ************************************
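The even_2G_alloc pass above ends with both nodes reporting the expected 512 hugepages. For anyone replaying this by hand, the same check can be made straight from sysfs; the snippet below is only an illustrative sketch (the two-node layout and the expected count of 512 come from this run, everything else is an assumption and not part of the captured output):

# Sketch: print per-node hugepage totals the way the node0/node1 result lines summarize them
for node_dir in /sys/devices/system/node/node[0-9]*; do
  # per-node meminfo lines look like "Node 0 HugePages_Total:   512"
  total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
  echo "$(basename "$node_dir")=$total expecting 512"
done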
00:06:49.764 10:02:02 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:06:49.764 10:02:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:49.764 10:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:49.764 10:02:02 -- common/autotest_common.sh@10 -- # set +x
00:06:49.764 ************************************
00:06:49.764 START TEST odd_alloc
00:06:49.764 ************************************
00:06:49.764 10:02:02 -- common/autotest_common.sh@1104 -- # odd_alloc
00:06:49.764 10:02:02 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:06:49.764 10:02:02 -- setup/hugepages.sh@49 -- # local size=2098176
00:06:49.764 10:02:02 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:49.764 10:02:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:49.764 10:02:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:06:49.764 10:02:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:49.764 10:02:02 -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:49.764 10:02:02 -- setup/hugepages.sh@62 -- # local user_nodes
00:06:49.764 10:02:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:06:49.764 10:02:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:49.764 10:02:02 -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:49.764 10:02:02 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:49.764 10:02:02 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:49.764 10:02:02 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:49.764 10:02:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:49.764 10:02:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:49.764 10:02:02 -- setup/hugepages.sh@83 -- # : 513
00:06:49.764 10:02:02 -- setup/hugepages.sh@84 -- # : 1
00:06:49.764 10:02:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:49.764 10:02:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:06:49.764 10:02:02 -- setup/hugepages.sh@83 -- # : 0
00:06:49.764 10:02:02 -- setup/hugepages.sh@84 -- # : 0
00:06:49.764 10:02:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
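The hugepages.sh@81-84 lines above distribute the 1025 requested pages over the two nodes, leaving the odd page on node 0 (513 + 512). A rough bash equivalent of that split, written out for clarity with illustrative variable names rather than the script's own code:

nr_hugepages=1025
no_nodes=2
declare -a nodes_test
# every node gets the integer share; the remainder goes to the lowest-numbered nodes
for ((node = 0; node < no_nodes; node++)); do
  nodes_test[node]=$((nr_hugepages / no_nodes))
done
for ((i = 0; i < nr_hugepages % no_nodes; i++)); do
  ((nodes_test[i] += 1))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512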
00:06:49.764 10:02:02 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:06:49.764 10:02:02 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:06:49.764 10:02:02 -- setup/hugepages.sh@160 -- # setup output
00:06:49.764 10:02:02 -- setup/common.sh@9 -- # [[ output == output ]]
00:06:49.764 10:02:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:06:53.061 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:53.061 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:06:53.061 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:06:54.970 10:02:08 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:06:54.970 10:02:08 -- setup/hugepages.sh@89 -- # local node
00:06:54.970 10:02:08 -- setup/hugepages.sh@90 -- # local sorted_t
00:06:54.970 10:02:08 -- setup/hugepages.sh@91 -- # local sorted_s
00:06:54.970 10:02:08 -- setup/hugepages.sh@92 -- # local surp
00:06:54.970 10:02:08 -- setup/hugepages.sh@93 -- # local resv
00:06:54.970 10:02:08 -- setup/hugepages.sh@94 -- # local anon
00:06:54.970 10:02:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:54.970 10:02:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:54.970 10:02:08 -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:54.970 10:02:08 -- setup/common.sh@18 -- # local node=
00:06:54.970 10:02:08 -- setup/common.sh@19 -- # local var val
00:06:54.970 10:02:08 -- setup/common.sh@20 -- # local mem_f mem
00:06:54.970 10:02:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:54.970 10:02:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:54.970 10:02:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:54.970 10:02:08 -- setup/common.sh@28 -- # mapfile -t mem
00:06:54.970 10:02:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:54.970 10:02:08 -- setup/common.sh@31 -- # IFS=': '
00:06:54.970 10:02:08 -- setup/common.sh@31 -- # read -r var val _
00:06:54.970 10:02:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74720532 kB' 'MemAvailable: 79739508 kB' 'Buffers: 20532 kB' 'Cached: 12146600 kB' 'SwapCached: 0 kB' 'Active: 7962064 kB' 'Inactive: 4745084 kB' 'Active(anon): 7331828 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542856 kB' 'Mapped: 178488 kB' 'Shmem: 6791812 kB' 'KReclaimable: 479840 kB' 'Slab: 880432 kB' 'SReclaimable: 479840 kB' 'SUnreclaim: 400592 kB' 'KernelStack: 16064 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 8721004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209532 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB'
[xtrace elided: setup/common.sh@31-32 walks every field of the dump above and skips it with 'continue' until AnonHugePages matches]
00:06:54.971 10:02:08 -- setup/common.sh@33 -- # echo 0
00:06:54.971 10:02:08 -- setup/common.sh@33 -- # return 0
00:06:54.971 10:02:08 -- setup/hugepages.sh@97 -- # anon=0
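The block above is the xtrace of setup/common.sh's get_meminfo helper answering a single question, AnonHugePages, which comes back as 0 on this run. Condensed into plain bash, the lookup pattern visible in the trace is roughly the sketch below; it mirrors the traced commands but is not the script's verbatim source:

shopt -s extglob                      # needed for the "Node N " prefix strip below
get_meminfo() {
  local get=$1 node=${2:-}
  local var val _ mem_f=/proc/meminfo
  # per-node queries read that node's own meminfo instead of the global file
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local -a mem
  mapfile -t mem <"$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node 0 " prefix on per-node files
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<<"$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}
get_meminfo AnonHugePages             # -> 0 on this box
get_meminfo HugePages_Total 0         # per-node variant, reads node0's meminfo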
00:06:54.971 10:02:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:54.971 10:02:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:54.971 10:02:08 -- setup/common.sh@18 -- # local node=
00:06:54.971 10:02:08 -- setup/common.sh@19 -- # local var val
00:06:54.971 10:02:08 -- setup/common.sh@20 -- # local mem_f mem
00:06:54.971 10:02:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:54.971 10:02:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:54.971 10:02:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:54.971 10:02:08 -- setup/common.sh@28 -- # mapfile -t mem
00:06:54.971 10:02:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:54.971 10:02:08 -- setup/common.sh@31 -- # IFS=': '
00:06:54.971 10:02:08 -- setup/common.sh@31 -- # read -r var val _
00:06:54.971 10:02:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74720532 kB' 'MemAvailable: 79739508 kB' 'Buffers: 20532 kB' 'Cached: 12146600 kB' 'SwapCached: 0 kB' 'Active: 7962984 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332748 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543772 kB' 'Mapped: 178500 kB' 'Shmem: 6791812 kB' 'KReclaimable: 479840 kB' 'Slab: 880428 kB' 'SReclaimable: 479840 kB' 'SUnreclaim: 400588 kB' 'KernelStack: 16096 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 8721016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209500 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB'
[xtrace elided: the same per-field scan repeats over the dump above until HugePages_Surp matches]
00:06:54.972 10:02:08 -- setup/common.sh@33 -- # echo 0
00:06:54.972 10:02:08 -- setup/common.sh@33 -- # return 0
00:06:54.972 10:02:08 -- setup/hugepages.sh@99 -- # surp=0
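verify_nr_hugepages asks for each counter separately (AnonHugePages and HugePages_Surp above, HugePages_Rsvd and HugePages_Total below), which is why the same /proc/meminfo dump keeps reappearing in the trace. Purely as an illustration, and not how setup/common.sh actually does it, the counters the upcoming consistency check depends on can be pulled in a single pass:

awk '/^(HugePages_Total|HugePages_Rsvd|HugePages_Surp|AnonHugePages):/ {print $1, $2}' /proc/meminfo
# On this run that would print:
#   AnonHugePages: 0
#   HugePages_Total: 1025
#   HugePages_Rsvd: 0
#   HugePages_Surp: 0

A total of 1025 with zero reserved and zero surplus pages is what lets the later '(( 1025 == nr_hugepages + surp + resv ))' check pass.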
00:06:54.973 10:02:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:54.973 10:02:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:54.973 10:02:08 -- setup/common.sh@18 -- # local node=
00:06:54.973 10:02:08 -- setup/common.sh@19 -- # local var val
00:06:54.973 10:02:08 -- setup/common.sh@20 -- # local mem_f mem
00:06:54.973 10:02:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:54.973 10:02:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:54.973 10:02:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:54.973 10:02:08 -- setup/common.sh@28 -- # mapfile -t mem
00:06:54.973 10:02:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:54.973 10:02:08 -- setup/common.sh@31 -- # IFS=': '
00:06:54.973 10:02:08 -- setup/common.sh@31 -- # read -r var val _
00:06:54.973 10:02:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74720808 kB' 'MemAvailable: 79739784 kB' 'Buffers: 20532 kB' 'Cached: 12146612 kB' 'SwapCached: 0 kB' 'Active: 7962036 kB' 'Inactive: 4745084 kB' 'Active(anon): 7331800 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543332 kB' 'Mapped: 178384 kB' 'Shmem: 6791824 kB' 'KReclaimable: 479840 kB' 'Slab: 880424 kB' 'SReclaimable: 479840 kB' 'SUnreclaim: 400584 kB' 'KernelStack: 16096 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 8721028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209500 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB'
[xtrace elided: the per-field scan repeats once more until HugePages_Rsvd matches]
00:06:54.974 10:02:08 -- setup/common.sh@33 -- # echo 0
00:06:54.974 10:02:08 -- setup/common.sh@33 -- # return 0
00:06:54.974 10:02:08 -- setup/hugepages.sh@100 -- # resv=0
00:06:54.974 10:02:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:06:54.974 nr_hugepages=1025
00:06:54.974 10:02:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:54.974
resv_hugepages=0 00:06:54.974 10:02:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:54.974 surplus_hugepages=0 00:06:54.974 10:02:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:54.974 anon_hugepages=0 00:06:54.974 10:02:08 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:54.974 10:02:08 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:54.974 10:02:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:54.974 10:02:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:54.974 10:02:08 -- setup/common.sh@18 -- # local node= 00:06:54.974 10:02:08 -- setup/common.sh@19 -- # local var val 00:06:54.974 10:02:08 -- setup/common.sh@20 -- # local mem_f mem 00:06:54.974 10:02:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:54.974 10:02:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:54.974 10:02:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:54.974 10:02:08 -- setup/common.sh@28 -- # mapfile -t mem 00:06:54.974 10:02:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74720808 kB' 'MemAvailable: 79739784 kB' 'Buffers: 20532 kB' 'Cached: 12146628 kB' 'SwapCached: 0 kB' 'Active: 7961884 kB' 'Inactive: 4745084 kB' 'Active(anon): 7331648 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543124 kB' 'Mapped: 178384 kB' 'Shmem: 6791840 kB' 'KReclaimable: 479840 kB' 'Slab: 880416 kB' 'SReclaimable: 479840 kB' 'SUnreclaim: 400576 kB' 'KernelStack: 16080 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 8721044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209500 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- 
setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.974 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.974 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 
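The repetitive [[ ... ]] / continue entries above are the get_meminfo helper from setup/common.sh scanning a meminfo file one field at a time until the requested key matches. A minimal reconstruction of that helper, pieced together from the traced commands, is sketched below; the variable names follow the trace, but the printf / while-read plumbing and the extglob setting are assumptions, not verbatim source.

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _ mem_f mem
        mem_f=/proc/meminfo
        # With a node index, read the per-node copy instead of the global file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan field by field until the requested key matches, then print its value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }
    get_meminfo HugePages_Rsvd       # -> 0     (system-wide /proc/meminfo, as traced above)
    get_meminfo HugePages_Total      # -> 1025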
00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.975 10:02:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.975 10:02:08 -- setup/common.sh@33 -- # echo 1025 00:06:54.975 10:02:08 -- setup/common.sh@33 -- # return 0 00:06:54.975 10:02:08 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:54.975 10:02:08 -- setup/hugepages.sh@112 -- # get_nodes 00:06:54.975 10:02:08 -- setup/hugepages.sh@27 -- # local node 00:06:54.975 10:02:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:54.975 10:02:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:54.975 10:02:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:54.975 10:02:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:06:54.975 10:02:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:54.975 10:02:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:54.975 10:02:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:54.975 10:02:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:54.975 10:02:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:54.975 10:02:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:54.975 10:02:08 -- setup/common.sh@18 -- # local node=0 00:06:54.975 10:02:08 -- setup/common.sh@19 -- # local var val 00:06:54.975 10:02:08 -- setup/common.sh@20 -- # local mem_f mem 00:06:54.975 10:02:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:54.975 10:02:08 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:06:54.975 10:02:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:54.975 10:02:08 -- setup/common.sh@28 -- # mapfile -t mem 00:06:54.975 10:02:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.975 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116964 kB' 'MemFree: 41490612 kB' 'MemUsed: 6626352 kB' 'SwapCached: 0 kB' 'Active: 2953456 kB' 'Inactive: 607744 kB' 'Active(anon): 2573820 kB' 'Inactive(anon): 0 kB' 'Active(file): 379636 kB' 'Inactive(file): 607744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3190680 kB' 'Mapped: 130508 kB' 'AnonPages: 373732 kB' 'Shmem: 2203300 kB' 'KernelStack: 9992 kB' 'PageTables: 5460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 316892 kB' 'Slab: 547712 kB' 'SReclaimable: 316892 kB' 'SUnreclaim: 230820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.976 10:02:08 -- setup/common.sh@32 -- # continue 00:06:54.976 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 
00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
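The get_nodes step traced a few entries above (setup/hugepages.sh@27-33) fills nodes_sys[] with the hugepage count the kernel currently reports for each NUMA node, 512 on node0 and 513 on node1 in this run. xtrace only shows the already-expanded values, so the sysfs attribute read below is an assumption; the loop, the node glob, and the ${node##*node} indexing are taken from the trace.

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the 512/513 values: the node's 2048 kB hugepage count.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}        # 2 on this machine
    (( no_nodes > 0 ))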
00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@33 -- # echo 0 00:06:55.237 10:02:08 -- setup/common.sh@33 -- # return 0 00:06:55.237 10:02:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:55.237 10:02:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:55.237 10:02:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:55.237 10:02:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:55.237 10:02:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:55.237 10:02:08 -- setup/common.sh@18 -- # local node=1 00:06:55.237 10:02:08 -- setup/common.sh@19 -- # local var val 00:06:55.237 10:02:08 -- setup/common.sh@20 -- # local mem_f mem 00:06:55.237 10:02:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:55.237 10:02:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:55.237 10:02:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:55.237 10:02:08 -- setup/common.sh@28 -- # mapfile -t mem 00:06:55.237 10:02:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176564 kB' 'MemFree: 33229944 kB' 'MemUsed: 10946620 kB' 'SwapCached: 0 kB' 'Active: 5008780 kB' 'Inactive: 4137340 kB' 'Active(anon): 4758180 kB' 'Inactive(anon): 0 kB' 'Active(file): 250600 kB' 'Inactive(file): 4137340 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976484 kB' 'Mapped: 47876 kB' 'AnonPages: 169792 kB' 'Shmem: 4588544 kB' 'KernelStack: 6104 kB' 'PageTables: 2884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162948 kB' 'Slab: 332704 kB' 'SReclaimable: 162948 kB' 'SUnreclaim: 169756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.237 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.237 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 
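For the numbers in this block, the odd_alloc accounting closes as expected: the system-wide meminfo reports 1025 hugepages with 0 reserved and 0 surplus, and the per-node meminfo copies printed above show 512 pages on node0 and 513 on node1, which sum back to 1025. A quick check of that arithmetic, using the values from the trace:

    nr_hugepages=1025; resv=0; surp=0            # HugePages_Total / _Rsvd / _Surp from the trace
    (( 1025 == nr_hugepages + surp + resv ))     # global check, as in setup/hugepages.sh@107-110
    node0=512; node1=513                         # HugePages_Total from node0/node1 meminfo
    (( node0 + node1 == nr_hugepages )) && echo "512 + 513 = 1025, the split accounts for every page"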
00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- 
setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # continue 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # IFS=': ' 00:06:55.238 10:02:08 -- setup/common.sh@31 -- # read -r var val _ 00:06:55.238 10:02:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:55.238 10:02:08 -- setup/common.sh@33 -- # echo 0 00:06:55.238 10:02:08 -- setup/common.sh@33 -- # return 0 00:06:55.238 10:02:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:55.238 10:02:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:55.238 10:02:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:55.238 10:02:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:55.238 10:02:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:06:55.238 node0=512 expecting 513 00:06:55.238 10:02:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:55.238 10:02:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:55.238 10:02:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:55.238 10:02:08 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:06:55.238 node1=513 expecting 512 00:06:55.238 10:02:08 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 
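The closing comparison explains why "node0=512 expecting 513" and "node1=513 expecting 512" still pass: verify_nr_hugepages records the computed split (nodes_test) and the kernel's actual placement (nodes_sys) as array indices, so the final [[ 512 513 == 512 513 ]] only requires the two sets of per-node counts to match, not that each node received exactly the count computed for it. A small sketch of that trick; which array holds 512 versus 513 for each node is inferred from the "expecting" messages.

    nodes_test=([0]=513 [1]=512)   # counts the test computed for node0/node1 (inferred pairing)
    nodes_sys=([0]=512 [1]=513)    # counts the kernel actually placed on node0/node1
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # use the count itself as the index, so that
        sorted_s[nodes_sys[node]]=1     # ${!array[*]} later lists the counts in ascending order
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd_alloc layout verified"   # "512 513" == "512 513"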
00:06:55.238 00:06:55.238 real 0m5.400s 00:06:55.238 user 0m1.904s 00:06:55.238 sys 0m3.542s 00:06:55.238 10:02:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.238 10:02:08 -- common/autotest_common.sh@10 -- # set +x 00:06:55.238 ************************************ 00:06:55.238 END TEST odd_alloc 00:06:55.238 ************************************ 00:06:55.238 10:02:08 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:55.238 10:02:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.238 10:02:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.238 10:02:08 -- common/autotest_common.sh@10 -- # set +x 00:06:55.238 ************************************ 00:06:55.238 START TEST custom_alloc 00:06:55.238 ************************************ 00:06:55.238 10:02:08 -- common/autotest_common.sh@1104 -- # custom_alloc 00:06:55.238 10:02:08 -- setup/hugepages.sh@167 -- # local IFS=, 00:06:55.238 10:02:08 -- setup/hugepages.sh@169 -- # local node 00:06:55.238 10:02:08 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:55.238 10:02:08 -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:55.238 10:02:08 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:55.238 10:02:08 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:55.238 10:02:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:06:55.238 10:02:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:55.238 10:02:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:55.238 10:02:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:55.238 10:02:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:55.239 10:02:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:55.239 10:02:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:55.239 10:02:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:55.239 10:02:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:55.239 10:02:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:55.239 10:02:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:55.239 10:02:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:55.239 10:02:08 -- setup/hugepages.sh@83 -- # : 256 00:06:55.239 10:02:08 -- setup/hugepages.sh@84 -- # : 1 00:06:55.239 10:02:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:55.239 10:02:08 -- setup/hugepages.sh@83 -- # : 0 00:06:55.239 10:02:08 -- setup/hugepages.sh@84 -- # : 0 00:06:55.239 10:02:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:55.239 10:02:08 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:06:55.239 10:02:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:55.239 10:02:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:55.239 10:02:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:55.239 10:02:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:55.239 10:02:08 -- setup/hugepages.sh@62 -- # 
local user_nodes 00:06:55.239 10:02:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:55.239 10:02:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:55.239 10:02:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:55.239 10:02:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:55.239 10:02:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:55.239 10:02:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:55.239 10:02:08 -- setup/hugepages.sh@78 -- # return 0 00:06:55.239 10:02:08 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:06:55.239 10:02:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:55.239 10:02:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:55.239 10:02:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:55.239 10:02:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:55.239 10:02:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:55.239 10:02:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:55.239 10:02:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:55.239 10:02:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:55.239 10:02:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:55.239 10:02:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:55.239 10:02:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:55.239 10:02:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:06:55.239 10:02:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:55.239 10:02:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:55.239 10:02:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:55.239 10:02:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:06:55.239 10:02:08 -- setup/hugepages.sh@78 -- # return 0 00:06:55.239 10:02:08 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:06:55.239 10:02:08 -- setup/hugepages.sh@187 -- # setup output 00:06:55.239 10:02:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:55.239 10:02:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:58.575 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:58.575 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:80:04.4 
(8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:58.575 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:00.501 10:02:13 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:07:00.501 10:02:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:07:00.501 10:02:13 -- setup/hugepages.sh@89 -- # local node 00:07:00.501 10:02:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:00.501 10:02:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:00.501 10:02:13 -- setup/hugepages.sh@92 -- # local surp 00:07:00.501 10:02:13 -- setup/hugepages.sh@93 -- # local resv 00:07:00.501 10:02:13 -- setup/hugepages.sh@94 -- # local anon 00:07:00.501 10:02:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:00.501 10:02:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:00.501 10:02:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:00.501 10:02:13 -- setup/common.sh@18 -- # local node= 00:07:00.501 10:02:13 -- setup/common.sh@19 -- # local var val 00:07:00.501 10:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:00.501 10:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:00.501 10:02:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:00.501 10:02:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:00.501 10:02:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:00.501 10:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:00.501 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.501 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73684796 kB' 'MemAvailable: 78703804 kB' 'Buffers: 20532 kB' 'Cached: 12146760 kB' 'SwapCached: 0 kB' 'Active: 7963312 kB' 'Inactive: 4745084 kB' 'Active(anon): 7333076 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544036 kB' 'Mapped: 178588 kB' 'Shmem: 6791972 kB' 'KReclaimable: 479872 kB' 'Slab: 880456 kB' 'SReclaimable: 479872 kB' 'SUnreclaim: 400584 kB' 'KernelStack: 16128 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 8721704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209548 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- 
# continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 
10:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.502 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.502 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:00.503 10:02:13 -- setup/common.sh@33 -- # echo 0 00:07:00.503 10:02:13 -- setup/common.sh@33 -- # return 0 00:07:00.503 10:02:13 -- setup/hugepages.sh@97 -- # anon=0 00:07:00.503 10:02:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:00.503 10:02:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:00.503 10:02:13 -- setup/common.sh@18 -- # local node= 00:07:00.503 10:02:13 -- setup/common.sh@19 -- # local var val 00:07:00.503 10:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:00.503 10:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:00.503 10:02:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:00.503 10:02:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:00.503 10:02:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:00.503 10:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73685132 kB' 'MemAvailable: 78704140 kB' 'Buffers: 20532 kB' 'Cached: 12146760 kB' 'SwapCached: 0 kB' 'Active: 7963132 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332896 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544248 kB' 'Mapped: 178508 kB' 'Shmem: 6791972 kB' 'KReclaimable: 479872 kB' 'Slab: 880440 kB' 'SReclaimable: 479872 kB' 'SUnreclaim: 400568 kB' 'KernelStack: 16128 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 8721716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209548 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 
-- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
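[editor's note] The repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" entries above are setup/common.sh's get_meminfo() walking /proc/meminfo field by field until it reaches the requested key (AnonHugePages returned 0 just above; HugePages_Surp is being scanned here). A minimal stand-in for that lookup is sketched below for illustration only: meminfo_value is a hypothetical name, the sed/awk pipeline is not the upstream script, and unlike the traced helper it prints nothing (rather than 0) when the key is missing.

meminfo_value() {                      # hypothetical name, sketch of the get_meminfo lookup
    local key=$1 node=${2:-}           # e.g. HugePages_Surp, optional NUMA node number
    local file=/proc/meminfo
    # with a node argument the per-node stats come from sysfs instead
    [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
    # per-node lines carry a "Node <N> " prefix; strip it, then match the key and print its value
    sed 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k { print $2; exit }'
}

# usage, mirroring the values seen in this run:
anon=$(meminfo_value AnonHugePages)        # 0
surp=$(meminfo_value HugePages_Surp)       # 0
total=$(meminfo_value HugePages_Total)     # 1536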
00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.503 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.503 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 
10:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 
10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.504 10:02:13 -- setup/common.sh@33 -- # echo 0 00:07:00.504 10:02:13 -- setup/common.sh@33 -- # return 0 00:07:00.504 10:02:13 -- setup/hugepages.sh@99 -- # surp=0 00:07:00.504 10:02:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:00.504 10:02:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:00.504 10:02:13 -- setup/common.sh@18 -- # local node= 00:07:00.504 10:02:13 -- setup/common.sh@19 -- # local var val 00:07:00.504 10:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:00.504 10:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:00.504 10:02:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:00.504 10:02:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:00.504 10:02:13 -- setup/common.sh@28 -- # 
mapfile -t mem 00:07:00.504 10:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73686592 kB' 'MemAvailable: 78705600 kB' 'Buffers: 20532 kB' 'Cached: 12146776 kB' 'SwapCached: 0 kB' 'Active: 7963124 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332888 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544260 kB' 'Mapped: 178508 kB' 'Shmem: 6791988 kB' 'KReclaimable: 479872 kB' 'Slab: 880416 kB' 'SReclaimable: 479872 kB' 'SUnreclaim: 400544 kB' 'KernelStack: 16128 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 8721728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209548 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # 
[[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.504 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.504 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # 
continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 
10:02:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.505 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.505 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:00.505 10:02:13 -- setup/common.sh@33 -- # echo 0 00:07:00.505 10:02:13 -- setup/common.sh@33 -- # return 0 00:07:00.505 10:02:13 -- setup/hugepages.sh@100 -- # resv=0 00:07:00.506 10:02:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:07:00.506 nr_hugepages=1536 00:07:00.506 10:02:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:00.506 resv_hugepages=0 00:07:00.506 10:02:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:00.506 surplus_hugepages=0 00:07:00.506 10:02:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:00.506 anon_hugepages=0 00:07:00.506 10:02:13 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:07:00.506 10:02:13 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:07:00.506 10:02:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:00.506 10:02:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:00.506 10:02:13 -- setup/common.sh@18 -- # local node= 00:07:00.506 10:02:13 -- setup/common.sh@19 -- # local var val 00:07:00.506 10:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:00.506 10:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:00.506 10:02:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:00.506 10:02:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:00.506 10:02:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:00.506 10:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73686848 kB' 'MemAvailable: 78705856 kB' 'Buffers: 20532 kB' 'Cached: 12146788 kB' 'SwapCached: 0 kB' 'Active: 7963108 kB' 'Inactive: 4745084 kB' 'Active(anon): 7332872 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544256 kB' 'Mapped: 178508 kB' 'Shmem: 6792000 kB' 'KReclaimable: 479872 kB' 'Slab: 880416 kB' 'SReclaimable: 479872 kB' 'SUnreclaim: 400544 kB' 'KernelStack: 16128 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 8721744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209532 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 
-- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.506 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.506 
10:02:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.506 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 
10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.507 10:02:13 -- setup/common.sh@33 -- # echo 1536 00:07:00.507 10:02:13 -- setup/common.sh@33 -- # return 0 00:07:00.507 10:02:13 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:07:00.507 10:02:13 -- setup/hugepages.sh@112 -- # get_nodes 00:07:00.507 10:02:13 -- setup/hugepages.sh@27 -- # local node 00:07:00.507 10:02:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:00.507 10:02:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:00.507 10:02:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:00.507 10:02:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:00.507 10:02:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:00.507 10:02:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:00.507 10:02:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:00.507 10:02:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:00.507 10:02:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:00.507 10:02:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:00.507 10:02:13 -- setup/common.sh@18 -- # local node=0 00:07:00.507 10:02:13 -- setup/common.sh@19 -- # local var val 00:07:00.507 10:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:00.507 10:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:00.507 10:02:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:00.507 10:02:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:00.507 10:02:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:00.507 10:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116964 kB' 'MemFree: 41501408 kB' 'MemUsed: 6615556 kB' 'SwapCached: 0 kB' 'Active: 2953924 kB' 'Inactive: 607744 kB' 'Active(anon): 2574288 kB' 'Inactive(anon): 0 kB' 'Active(file): 379636 kB' 'Inactive(file): 607744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3190796 kB' 'Mapped: 130568 kB' 'AnonPages: 374112 kB' 'Shmem: 2203416 kB' 'KernelStack: 10040 kB' 'PageTables: 5424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 316924 kB' 'Slab: 547744 kB' 'SReclaimable: 316924 kB' 'SUnreclaim: 230820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 
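[editor's note] At this point the trace has established nr_hugepages=1536 with resv=0, surp=0 and anon=0, and get_nodes has recorded the per-node targets nodes_sys[0]=512 and nodes_sys[1]=1024; it now re-runs get_meminfo HugePages_Surp against each node's sysfs meminfo (node0 reports HugePages_Total: 512). The snippet below is a rough, illustrative equivalent of that per-node cross-check, not the hugepages.sh code itself; the variable names and the final echo messages are made up for the sketch.

declare -A expect=( [0]=512 [1]=1024 )     # nodes_sys[] values echoed in the trace above
sum=0
for node in "${!expect[@]}"; do
    f=/sys/devices/system/node/node$node/meminfo
    # per-node lines look like "Node 0 HugePages_Total: 512", so the value is field 4
    got=$(awk '$3 == "HugePages_Total:" { print $4 }' "$f")
    (( sum += got ))
    [[ $got -eq ${expect[$node]} ]] || echo "node$node: expected ${expect[$node]}, got $got"
done
(( sum == 1536 )) && echo "per-node hugepage split matches nr_hugepages"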
00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.507 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.507 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@33 -- # echo 0 00:07:00.508 10:02:13 -- setup/common.sh@33 -- # return 0 00:07:00.508 10:02:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:00.508 10:02:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:00.508 10:02:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:00.508 10:02:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:00.508 10:02:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:00.508 10:02:13 -- setup/common.sh@18 -- # local node=1 00:07:00.508 10:02:13 -- setup/common.sh@19 -- # local var val 00:07:00.508 10:02:13 -- setup/common.sh@20 -- # local mem_f mem 00:07:00.508 10:02:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:00.508 10:02:13 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node1/meminfo ]] 00:07:00.508 10:02:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:00.508 10:02:13 -- setup/common.sh@28 -- # mapfile -t mem 00:07:00.508 10:02:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176564 kB' 'MemFree: 32185440 kB' 'MemUsed: 11991124 kB' 'SwapCached: 0 kB' 'Active: 5009204 kB' 'Inactive: 4137340 kB' 'Active(anon): 4758604 kB' 'Inactive(anon): 0 kB' 'Active(file): 250600 kB' 'Inactive(file): 4137340 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976536 kB' 'Mapped: 47940 kB' 'AnonPages: 170140 kB' 'Shmem: 4588596 kB' 'KernelStack: 6088 kB' 'PageTables: 2928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162948 kB' 'Slab: 332672 kB' 'SReclaimable: 162948 kB' 'SUnreclaim: 169724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.508 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.508 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 
00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # continue 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.509 10:02:13 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.509 10:02:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:00.509 10:02:13 -- setup/common.sh@33 -- # echo 0 00:07:00.509 10:02:13 -- setup/common.sh@33 -- # return 0 00:07:00.509 10:02:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:00.509 10:02:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:00.509 10:02:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:00.509 10:02:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:00.509 10:02:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:00.509 node0=512 expecting 512 00:07:00.509 10:02:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:00.509 10:02:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:00.509 10:02:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:00.509 10:02:13 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:07:00.509 node1=1024 expecting 1024 00:07:00.509 10:02:13 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:07:00.509 00:07:00.509 real 0m5.218s 00:07:00.509 user 0m1.813s 00:07:00.509 sys 0m3.361s 00:07:00.509 10:02:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.509 10:02:13 -- common/autotest_common.sh@10 -- # set +x 00:07:00.509 ************************************ 00:07:00.509 END TEST custom_alloc 00:07:00.509 ************************************ 00:07:00.509 10:02:13 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:00.509 10:02:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:00.509 10:02:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.509 10:02:13 -- common/autotest_common.sh@10 -- # set +x 00:07:00.509 ************************************ 00:07:00.509 START TEST no_shrink_alloc 00:07:00.509 ************************************ 00:07:00.509 10:02:13 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:07:00.509 10:02:13 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:00.509 10:02:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:07:00.510 10:02:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:00.510 10:02:13 -- setup/hugepages.sh@51 -- # shift 00:07:00.510 10:02:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:00.510 10:02:13 -- setup/hugepages.sh@52 -- # local node_ids 00:07:00.510 10:02:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:00.510 10:02:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:00.510 10:02:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:00.510 10:02:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:00.510 10:02:13 -- setup/hugepages.sh@62 -- # 
local user_nodes 00:07:00.510 10:02:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:00.510 10:02:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:00.510 10:02:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:00.510 10:02:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:00.510 10:02:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:00.510 10:02:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:00.510 10:02:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:00.510 10:02:13 -- setup/hugepages.sh@73 -- # return 0 00:07:00.510 10:02:13 -- setup/hugepages.sh@198 -- # setup output 00:07:00.510 10:02:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:00.510 10:02:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:07:03.799 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:03.799 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:07:03.799 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:05.710 10:02:18 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:07:05.710 10:02:18 -- setup/hugepages.sh@89 -- # local node 00:07:05.710 10:02:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:05.710 10:02:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:05.710 10:02:18 -- setup/hugepages.sh@92 -- # local surp 00:07:05.710 10:02:18 -- setup/hugepages.sh@93 -- # local resv 00:07:05.710 10:02:18 -- setup/hugepages.sh@94 -- # local anon 00:07:05.710 10:02:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:05.710 10:02:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:05.710 10:02:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:05.710 10:02:18 -- setup/common.sh@18 -- # local node= 00:07:05.710 10:02:18 -- setup/common.sh@19 -- # local var val 00:07:05.710 10:02:18 -- setup/common.sh@20 -- # local mem_f mem 00:07:05.710 10:02:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:05.710 10:02:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:05.710 10:02:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:05.710 10:02:18 -- setup/common.sh@28 -- # mapfile -t mem 00:07:05.710 10:02:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 92293528 kB' 'MemFree: 74672776 kB' 'MemAvailable: 79691784 kB' 'Buffers: 20532 kB' 'Cached: 12146912 kB' 'SwapCached: 0 kB' 'Active: 7965320 kB' 'Inactive: 4745084 kB' 'Active(anon): 7335084 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545776 kB' 'Mapped: 178600 kB' 'Shmem: 6792124 kB' 'KReclaimable: 479872 kB' 'Slab: 880800 kB' 'SReclaimable: 479872 kB' 'SUnreclaim: 400928 kB' 'KernelStack: 16096 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8722380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209628 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ Active(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.710 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.710 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 
00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 
00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:05.711 10:02:18 -- setup/common.sh@33 -- # echo 0 00:07:05.711 10:02:18 -- setup/common.sh@33 -- # return 0 00:07:05.711 10:02:18 -- setup/hugepages.sh@97 -- # anon=0 00:07:05.711 10:02:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:05.711 10:02:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:05.711 10:02:18 -- setup/common.sh@18 -- # local node= 00:07:05.711 10:02:18 -- setup/common.sh@19 -- # local var val 00:07:05.711 10:02:18 -- setup/common.sh@20 -- # local mem_f mem 00:07:05.711 10:02:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:05.711 10:02:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:05.711 10:02:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:05.711 10:02:18 -- setup/common.sh@28 -- # mapfile -t mem 00:07:05.711 10:02:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74673104 kB' 'MemAvailable: 79692112 kB' 'Buffers: 20532 kB' 'Cached: 12146912 kB' 'SwapCached: 0 kB' 'Active: 7964612 kB' 'Inactive: 4745084 kB' 'Active(anon): 7334376 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545512 kB' 'Mapped: 178520 kB' 'Shmem: 6792124 kB' 'KReclaimable: 479872 kB' 'Slab: 880772 kB' 'SReclaimable: 479872 kB' 'SUnreclaim: 400900 kB' 'KernelStack: 16096 kB' 'PageTables: 
8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8722392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209612 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.711 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.711 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 
10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 
-- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.712 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.712 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.713 10:02:18 -- setup/common.sh@33 -- # echo 0 00:07:05.713 10:02:18 -- setup/common.sh@33 -- # return 0 00:07:05.713 10:02:18 -- setup/hugepages.sh@99 -- # surp=0 00:07:05.713 10:02:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:05.713 10:02:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:05.713 10:02:18 -- setup/common.sh@18 -- # local node= 00:07:05.713 10:02:18 -- setup/common.sh@19 -- # local var val 00:07:05.713 10:02:18 -- setup/common.sh@20 -- # local mem_f mem 00:07:05.713 10:02:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:05.713 10:02:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:05.713 10:02:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:05.713 10:02:18 -- setup/common.sh@28 -- # mapfile -t mem 00:07:05.713 10:02:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:05.713 10:02:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74674324 kB' 'MemAvailable: 79693332 kB' 'Buffers: 20532 kB' 'Cached: 12146924 kB' 'SwapCached: 0 kB' 'Active: 7964636 kB' 'Inactive: 4745084 kB' 'Active(anon): 7334400 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545512 kB' 'Mapped: 178520 kB' 'Shmem: 6792136 kB' 'KReclaimable: 479872 kB' 'Slab: 880772 kB' 'SReclaimable: 479872 kB' 'SUnreclaim: 400900 kB' 'KernelStack: 16096 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8722404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209612 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 
-- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 
10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.713 10:02:18 -- setup/common.sh@32 -- # continue 
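The long run of bare `continue` statements above is setup/common.sh's get_meminfo scanning every field of the mapped meminfo output for a single key (HugePages_Surp and HugePages_Rsvd in this stretch). A minimal sketch of that lookup, reconstructed from the trace; the helper name and exact structure are approximations, not the script's code:

  get_meminfo_sketch() {
      # Print the value of one meminfo key the way the xtrace above walks it:
      # each "Key: value" line is split on ': ' and non-matching keys just continue.
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node queries read the sysfs copy; its lines carry a "Node N " prefix,
      # which common.sh strips with ${mem[@]#Node +([0-9]) } and sed drops here.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Rsvd  -> prints 0 on this host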
00:07:05.713 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.713 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # 
[[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:05.714 10:02:18 -- setup/common.sh@33 -- # echo 0 00:07:05.714 10:02:18 -- setup/common.sh@33 -- # return 0 00:07:05.714 10:02:18 -- setup/hugepages.sh@100 -- # resv=0 00:07:05.714 10:02:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:05.714 nr_hugepages=1024 00:07:05.714 10:02:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:05.714 
resv_hugepages=0 00:07:05.714 10:02:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:05.714 surplus_hugepages=0 00:07:05.714 10:02:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:05.714 anon_hugepages=0 00:07:05.714 10:02:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:05.714 10:02:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:05.714 10:02:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:05.714 10:02:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:05.714 10:02:18 -- setup/common.sh@18 -- # local node= 00:07:05.714 10:02:18 -- setup/common.sh@19 -- # local var val 00:07:05.714 10:02:18 -- setup/common.sh@20 -- # local mem_f mem 00:07:05.714 10:02:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:05.714 10:02:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:05.714 10:02:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:05.714 10:02:18 -- setup/common.sh@28 -- # mapfile -t mem 00:07:05.714 10:02:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74674324 kB' 'MemAvailable: 79693332 kB' 'Buffers: 20532 kB' 'Cached: 12146936 kB' 'SwapCached: 0 kB' 'Active: 7964204 kB' 'Inactive: 4745084 kB' 'Active(anon): 7333968 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545052 kB' 'Mapped: 178520 kB' 'Shmem: 6792148 kB' 'KReclaimable: 479872 kB' 'Slab: 880772 kB' 'SReclaimable: 479872 kB' 'SUnreclaim: 400900 kB' 'KernelStack: 16080 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8722420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209628 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.714 10:02:18 -- 
setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.714 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.714 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 
00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.715 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.715 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:05.716 10:02:18 -- setup/common.sh@33 -- # echo 1024 00:07:05.716 10:02:18 -- setup/common.sh@33 -- # return 0 00:07:05.716 10:02:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:05.716 10:02:18 -- setup/hugepages.sh@112 -- # get_nodes 00:07:05.716 10:02:18 -- setup/hugepages.sh@27 -- # local node 00:07:05.716 10:02:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:05.716 10:02:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:05.716 10:02:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:05.716 10:02:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:05.716 10:02:18 -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:05.716 10:02:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:05.716 10:02:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:05.716 10:02:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:05.716 10:02:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:05.716 10:02:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:05.716 10:02:18 -- setup/common.sh@18 -- # local node=0 00:07:05.716 10:02:18 -- setup/common.sh@19 -- # local var val 00:07:05.716 10:02:18 -- setup/common.sh@20 -- # local mem_f mem 00:07:05.716 10:02:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:05.716 10:02:18 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:07:05.716 10:02:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:05.716 10:02:18 -- setup/common.sh@28 -- # mapfile -t mem 00:07:05.716 10:02:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116964 kB' 'MemFree: 40438936 kB' 'MemUsed: 7678028 kB' 'SwapCached: 0 kB' 'Active: 2954408 kB' 'Inactive: 607744 kB' 'Active(anon): 2574772 kB' 'Inactive(anon): 0 kB' 'Active(file): 379636 kB' 'Inactive(file): 607744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3190920 kB' 'Mapped: 130512 kB' 'AnonPages: 374364 kB' 'Shmem: 2203540 kB' 'KernelStack: 10008 kB' 'PageTables: 5324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 316924 kB' 'Slab: 547948 kB' 'SReclaimable: 316924 kB' 'SUnreclaim: 231024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 
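By this point the lookup has switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo: get_nodes found two NUMA nodes on this host (no_nodes=2) and hugepages.sh is now reading HugePages_Surp for node 0, which holds all 1024 pages. A rough per-node tally in the same spirit; names are illustrative and the node1 value is inferred from the nodes_sys assignments in the trace rather than read directly:

  # Tally HugePages_Total per NUMA node from sysfs (illustrative only).
  declare -A node_hp
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      n=${node_dir##*node}
      # awk's $NF skips the "Node N " prefix these files carry.
      node_hp[$n]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
  done
  for n in "${!node_hp[@]}"; do
      echo "node$n=${node_hp[$n]}"   # this run: node0=1024 and, apparently, node1=0
  done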
00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.716 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.716 10:02:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # continue 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:05.717 10:02:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:05.717 10:02:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:05.717 10:02:18 -- setup/common.sh@33 -- # echo 0 00:07:05.717 10:02:18 -- setup/common.sh@33 -- # return 0 00:07:05.717 10:02:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:05.717 10:02:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:05.717 10:02:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:05.717 10:02:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:05.717 10:02:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:05.717 node0=1024 expecting 1024 00:07:05.717 10:02:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:05.717 10:02:18 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:05.717 10:02:18 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:05.717 10:02:18 -- setup/hugepages.sh@202 -- # setup output 00:07:05.717 10:02:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:05.717 10:02:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:07:09.006 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:09.006 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:07:09.006 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:11.547 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:11.547 10:02:24 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:11.547 10:02:24 -- setup/hugepages.sh@89 -- # local node 00:07:11.547 10:02:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:11.547 10:02:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:11.547 10:02:24 -- setup/hugepages.sh@92 -- # local surp 00:07:11.547 
10:02:24 -- setup/hugepages.sh@93 -- # local resv 00:07:11.547 10:02:24 -- setup/hugepages.sh@94 -- # local anon 00:07:11.547 10:02:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:11.547 10:02:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:11.547 10:02:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:11.547 10:02:24 -- setup/common.sh@18 -- # local node= 00:07:11.547 10:02:24 -- setup/common.sh@19 -- # local var val 00:07:11.547 10:02:24 -- setup/common.sh@20 -- # local mem_f mem 00:07:11.547 10:02:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.547 10:02:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.547 10:02:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.547 10:02:24 -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.547 10:02:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74727304 kB' 'MemAvailable: 79746312 kB' 'Buffers: 20532 kB' 'Cached: 12147052 kB' 'SwapCached: 0 kB' 'Active: 7967100 kB' 'Inactive: 4745084 kB' 'Active(anon): 7336864 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547544 kB' 'Mapped: 178684 kB' 'Shmem: 6792264 kB' 'KReclaimable: 479872 kB' 'Slab: 881340 kB' 'SReclaimable: 479872 kB' 'SUnreclaim: 401468 kB' 'KernelStack: 16144 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8722568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209500 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val 
_ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.547 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.547 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 
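The pass running here first checks whether transparent hugepages are disabled outright; since this host reports "always [madvise] never" (madvise selected), hugepages.sh goes on to read AnonHugePages, which resolves to anon=0 just below. A sketch of that guard, assuming the standard sysfs and /proc paths; the helper logic is an approximation of hugepages.sh@96-97, not its exact code:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
  if [[ $thp == *"[never]"* ]]; then
      anon=0                                                # THP forced off: nothing to count
  else
      anon=$(awk '/AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "anon_hugepages=$anon"                               # 0 on this run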
00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.548 10:02:24 -- setup/common.sh@33 -- # echo 0 00:07:11.548 10:02:24 -- setup/common.sh@33 -- # return 0 00:07:11.548 10:02:24 -- setup/hugepages.sh@97 -- # anon=0 00:07:11.548 10:02:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:11.548 10:02:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:11.548 10:02:24 -- setup/common.sh@18 -- # local node= 00:07:11.548 10:02:24 -- setup/common.sh@19 -- # local var val 00:07:11.548 10:02:24 -- setup/common.sh@20 -- # local mem_f mem 00:07:11.548 10:02:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
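Everything from verify_nr_hugepages above onward is the re-check that follows the scripts/setup.sh call: that run was asked for NRHUGE=512 with CLEAR_HUGE=no, found 1024 pages already reserved on node0, and left them in place ("Requested 512 hugepages but 1024 already allocated on node0"), so this pass should still observe 1024. Illustrative top-up logic consistent with that INFO line; the real setup.sh differs in detail:

  want=512
  nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  have=$(cat "$nr")
  if (( have >= want )); then
      echo "INFO: Requested $want hugepages but $have already allocated on node0"
  else
      echo "$want" > "$nr"   # raise the node0 2 MiB pool to the requested size
  fi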
00:07:11.548 10:02:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.548 10:02:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.548 10:02:24 -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.548 10:02:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74735876 kB' 'MemAvailable: 79754876 kB' 'Buffers: 20532 kB' 'Cached: 12147052 kB' 'SwapCached: 0 kB' 'Active: 7966208 kB' 'Inactive: 4745084 kB' 'Active(anon): 7335972 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547108 kB' 'Mapped: 178576 kB' 'Shmem: 6792264 kB' 'KReclaimable: 479864 kB' 'Slab: 881260 kB' 'SReclaimable: 479864 kB' 'SUnreclaim: 401396 kB' 'KernelStack: 16144 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8722580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209484 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- 
setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.548 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.548 10:02:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 
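A note on the backslash-heavy comparisons in this loop: xtrace prints the quoted right-hand side of [[ key == pattern ]] with every character escaped, which makes == a literal string comparison rather than a glob match, so the loop simply 'continue's past every field whose key is not the one being looked up. A compact version of the same idiom (names illustrative):

want=HugePages_Surp
for key in MemTotal MemFree HugePages_Surp; do
    # Quoting/escaping the right-hand side disables glob matching, so this is
    # a plain string comparison, exactly as rendered in the trace above.
    [[ $key == "$want" ]] || continue
    echo "matched: $key"
done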
00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.549 10:02:24 -- setup/common.sh@33 -- # echo 0 00:07:11.549 10:02:24 -- setup/common.sh@33 -- # return 0 00:07:11.549 10:02:24 -- setup/hugepages.sh@99 -- # surp=0 00:07:11.549 10:02:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:11.549 10:02:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:11.549 10:02:24 -- setup/common.sh@18 -- # local node= 00:07:11.549 10:02:24 -- setup/common.sh@19 -- # local var val 00:07:11.549 10:02:24 -- setup/common.sh@20 -- # local mem_f mem 00:07:11.549 10:02:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.549 10:02:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.549 10:02:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.549 10:02:24 -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.549 10:02:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74736884 kB' 'MemAvailable: 79755884 kB' 'Buffers: 20532 kB' 'Cached: 12147064 kB' 'SwapCached: 0 kB' 'Active: 7966384 kB' 'Inactive: 4745084 kB' 'Active(anon): 7336148 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547272 kB' 'Mapped: 
178576 kB' 'Shmem: 6792276 kB' 'KReclaimable: 479864 kB' 'Slab: 881260 kB' 'SReclaimable: 479864 kB' 'SUnreclaim: 401396 kB' 'KernelStack: 16144 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8722592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209500 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.549 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.549 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 
10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # 
continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.550 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.550 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.551 10:02:24 -- setup/common.sh@33 -- # echo 0 00:07:11.551 10:02:24 -- setup/common.sh@33 -- # return 0 00:07:11.551 10:02:24 -- setup/hugepages.sh@100 -- # resv=0 00:07:11.551 10:02:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:11.551 nr_hugepages=1024 00:07:11.551 10:02:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:11.551 resv_hugepages=0 00:07:11.551 10:02:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:11.551 surplus_hugepages=0 00:07:11.551 10:02:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:11.551 anon_hugepages=0 00:07:11.551 10:02:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:11.551 10:02:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:11.551 10:02:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:11.551 10:02:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:11.551 10:02:24 -- setup/common.sh@18 -- # local node= 00:07:11.551 10:02:24 -- setup/common.sh@19 -- # local var val 00:07:11.551 10:02:24 -- setup/common.sh@20 -- # local mem_f mem 00:07:11.551 10:02:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.551 10:02:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.551 10:02:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.551 10:02:24 -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.551 10:02:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74742244 kB' 'MemAvailable: 79761244 kB' 'Buffers: 20532 kB' 'Cached: 12147064 kB' 'SwapCached: 0 kB' 'Active: 7966580 kB' 'Inactive: 4745084 kB' 'Active(anon): 7336344 kB' 'Inactive(anon): 0 kB' 'Active(file): 630236 kB' 'Inactive(file): 4745084 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547456 kB' 'Mapped: 178576 kB' 'Shmem: 6792276 kB' 'KReclaimable: 479864 kB' 'Slab: 881236 kB' 'SReclaimable: 479864 kB' 'SUnreclaim: 401372 kB' 'KernelStack: 16144 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8722608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 209500 kB' 'VmallocChunk: 0 kB' 'Percpu: 62080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 460220 kB' 'DirectMap2M: 8652800 kB' 'DirectMap1G: 92274688 kB' 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 
10:02:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.551 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.551 10:02:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 
-- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ VmallocChunk 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.552 10:02:24 -- setup/common.sh@33 -- # echo 1024 00:07:11.552 10:02:24 -- setup/common.sh@33 -- # return 0 00:07:11.552 10:02:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:11.552 10:02:24 -- setup/hugepages.sh@112 -- # get_nodes 00:07:11.552 10:02:24 -- setup/hugepages.sh@27 -- # local node 00:07:11.552 10:02:24 -- setup/hugepages.sh@29 
-- # for node in /sys/devices/system/node/node+([0-9]) 00:07:11.552 10:02:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:11.552 10:02:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:11.552 10:02:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:11.552 10:02:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:11.552 10:02:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:11.552 10:02:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:11.552 10:02:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:11.552 10:02:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:11.552 10:02:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:11.552 10:02:24 -- setup/common.sh@18 -- # local node=0 00:07:11.552 10:02:24 -- setup/common.sh@19 -- # local var val 00:07:11.552 10:02:24 -- setup/common.sh@20 -- # local mem_f mem 00:07:11.552 10:02:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.552 10:02:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:11.552 10:02:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:11.552 10:02:24 -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.552 10:02:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.552 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.552 10:02:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116964 kB' 'MemFree: 40447596 kB' 'MemUsed: 7669368 kB' 'SwapCached: 0 kB' 'Active: 2954772 kB' 'Inactive: 607744 kB' 'Active(anon): 2575136 kB' 'Inactive(anon): 0 kB' 'Active(file): 379636 kB' 'Inactive(file): 607744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3190980 kB' 'Mapped: 130512 kB' 'AnonPages: 374792 kB' 'Shmem: 2203600 kB' 'KernelStack: 10040 kB' 'PageTables: 5404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 316916 kB' 'Slab: 547600 kB' 'SReclaimable: 316916 kB' 'SUnreclaim: 230684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- 
setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # continue 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.553 10:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.553 10:02:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.553 10:02:24 -- setup/common.sh@33 -- # echo 0 00:07:11.553 10:02:24 -- setup/common.sh@33 -- # return 0 00:07:11.553 10:02:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:11.553 10:02:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:11.553 10:02:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:11.553 10:02:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:11.553 10:02:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:11.553 node0=1024 expecting 1024 00:07:11.553 10:02:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:11.553 00:07:11.553 real 0m10.933s 00:07:11.553 user 0m3.954s 00:07:11.553 sys 0m6.942s 00:07:11.553 10:02:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.553 10:02:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.553 ************************************ 00:07:11.553 END TEST no_shrink_alloc 00:07:11.553 ************************************ 00:07:11.553 10:02:24 -- setup/hugepages.sh@217 -- # clear_hp 00:07:11.553 10:02:24 -- setup/hugepages.sh@37 -- # local node hp 00:07:11.553 10:02:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:11.553 10:02:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:11.554 10:02:24 -- setup/hugepages.sh@41 -- # echo 0 00:07:11.554 10:02:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:11.554 10:02:24 -- setup/hugepages.sh@41 -- # echo 0 
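The 'node0=1024 expecting 1024' check above verifies that the per-NUMA-node hugepage counts read from /sys/devices/system/node/node<N>/meminfo add up to the global pool. A rough standalone equivalent of that cross-check, awk-based and purely illustrative:

# Sum HugePages_Total across NUMA nodes and compare it with the global count.
total=0
for f in /sys/devices/system/node/node[0-9]*/meminfo; do
    # Per-node lines read "Node <N> HugePages_Total: ...", so take the last field.
    n=$(awk '/HugePages_Total:/ {print $NF}' "$f")
    total=$(( total + n ))
done
global=$(awk '/HugePages_Total:/ {print $NF}' /proc/meminfo)
echo "per-node sum: $total, global: $global"   # both should report 1024 for the run above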
00:07:11.554 10:02:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:11.554 10:02:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:11.554 10:02:24 -- setup/hugepages.sh@41 -- # echo 0 00:07:11.554 10:02:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:11.554 10:02:24 -- setup/hugepages.sh@41 -- # echo 0 00:07:11.554 10:02:24 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:11.554 10:02:24 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:11.554 00:07:11.554 real 0m41.583s 00:07:11.554 user 0m13.449s 00:07:11.554 sys 0m25.090s 00:07:11.554 10:02:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.554 10:02:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.554 ************************************ 00:07:11.554 END TEST hugepages 00:07:11.554 ************************************ 00:07:11.554 10:02:24 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:07:11.554 10:02:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.554 10:02:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.554 10:02:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.554 ************************************ 00:07:11.554 START TEST driver 00:07:11.554 ************************************ 00:07:11.554 10:02:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:07:11.554 * Looking for test storage... 00:07:11.554 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:07:11.554 10:02:24 -- setup/driver.sh@68 -- # setup reset 00:07:11.554 10:02:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:11.554 10:02:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:07:18.118 10:02:31 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:18.118 10:02:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.118 10:02:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.118 10:02:31 -- common/autotest_common.sh@10 -- # set +x 00:07:18.118 ************************************ 00:07:18.118 START TEST guess_driver 00:07:18.118 ************************************ 00:07:18.118 10:02:31 -- common/autotest_common.sh@1104 -- # guess_driver 00:07:18.118 10:02:31 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:18.118 10:02:31 -- setup/driver.sh@47 -- # local fail=0 00:07:18.118 10:02:31 -- setup/driver.sh@49 -- # pick_driver 00:07:18.118 10:02:31 -- setup/driver.sh@36 -- # vfio 00:07:18.118 10:02:31 -- setup/driver.sh@21 -- # local iommu_grups 00:07:18.118 10:02:31 -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:18.118 10:02:31 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:18.118 10:02:31 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:07:18.118 10:02:31 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:18.118 10:02:31 -- setup/driver.sh@29 -- # (( 190 > 0 )) 00:07:18.118 10:02:31 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:07:18.118 10:02:31 -- setup/driver.sh@14 -- # mod vfio_pci 00:07:18.118 10:02:31 -- setup/driver.sh@12 -- # dep vfio_pci 00:07:18.118 10:02:31 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:07:18.118 10:02:31 -- setup/driver.sh@12 -- # [[ insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:07:18.118 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:18.118 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:18.118 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:18.118 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:18.118 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:07:18.118 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:07:18.118 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:07:18.118 10:02:31 -- setup/driver.sh@30 -- # return 0 00:07:18.118 10:02:31 -- setup/driver.sh@37 -- # echo vfio-pci 00:07:18.118 10:02:31 -- setup/driver.sh@49 -- # driver=vfio-pci 00:07:18.118 10:02:31 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:18.118 10:02:31 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:07:18.118 Looking for driver=vfio-pci 00:07:18.118 10:02:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:18.118 10:02:31 -- setup/driver.sh@45 -- # setup output config 00:07:18.118 10:02:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:18.118 10:02:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:21.408 10:02:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 
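The pick_driver/vfio steps traced above reduce to two checks: the host must expose at least one IOMMU group under /sys/kernel/iommu_groups, and modprobe --show-depends vfio_pci must resolve to real .ko modules. A condensed sketch of that decision, assuming uio_pci_generic as the fallback; choose_driver is an illustrative name, not the function in test/setup/driver.sh:

    # Pick vfio-pci when an IOMMU is active and the module chain resolves; the
    # fallback driver named here is an assumption, not taken from the test.
    choose_driver() {
        local n
        n=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
        if ((n > 0)) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo uio_pci_generic
        fi
    }
    choose_driver    # the run above counted 190 groups, so this prints "vfio-pci"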
00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:21.408 10:02:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:21.408 10:02:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:21.408 10:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.701 10:02:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.701 10:02:37 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.701 10:02:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:26.080 10:02:39 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:26.080 10:02:39 -- setup/driver.sh@65 -- # setup reset 00:07:26.080 10:02:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:26.080 10:02:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:07:32.648 00:07:32.648 real 0m14.672s 00:07:32.648 user 0m3.427s 00:07:32.648 sys 0m7.254s 00:07:32.648 10:02:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.648 10:02:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 ************************************ 00:07:32.648 END TEST guess_driver 00:07:32.648 ************************************ 00:07:32.648 00:07:32.648 real 0m21.267s 00:07:32.648 user 0m5.199s 00:07:32.648 sys 0m11.169s 00:07:32.648 10:02:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.648 10:02:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 ************************************ 00:07:32.648 END TEST driver 00:07:32.648 ************************************ 00:07:32.648 10:02:45 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:07:32.648 10:02:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:32.648 10:02:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.648 10:02:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 ************************************ 00:07:32.648 START TEST devices 00:07:32.648 ************************************ 00:07:32.648 10:02:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:07:32.908 * Looking for test storage... 
00:07:32.908 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:07:32.908 10:02:45 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:32.908 10:02:45 -- setup/devices.sh@192 -- # setup reset 00:07:32.908 10:02:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:32.908 10:02:45 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:07:38.183 10:02:51 -- setup/devices.sh@194 -- # get_zoned_devs 00:07:38.183 10:02:51 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:07:38.183 10:02:51 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:07:38.183 10:02:51 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:07:38.183 10:02:51 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:07:38.183 10:02:51 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:07:38.183 10:02:51 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:07:38.183 10:02:51 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:38.183 10:02:51 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:07:38.183 10:02:51 -- setup/devices.sh@196 -- # blocks=() 00:07:38.183 10:02:51 -- setup/devices.sh@196 -- # declare -a blocks 00:07:38.183 10:02:51 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:38.183 10:02:51 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:38.183 10:02:51 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:38.183 10:02:51 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:38.183 10:02:51 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:38.183 10:02:51 -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:38.183 10:02:51 -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:07:38.183 10:02:51 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:07:38.183 10:02:51 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:38.183 10:02:51 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:07:38.183 10:02:51 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:07:38.183 No valid GPT data, bailing 00:07:38.183 10:02:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:38.183 10:02:51 -- scripts/common.sh@393 -- # pt= 00:07:38.183 10:02:51 -- scripts/common.sh@394 -- # return 1 00:07:38.183 10:02:51 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:38.183 10:02:51 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:38.183 10:02:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:38.183 10:02:51 -- setup/common.sh@80 -- # echo 4000787030016 00:07:38.183 10:02:51 -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:07:38.183 10:02:51 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:38.183 10:02:51 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:07:38.183 10:02:51 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:38.183 10:02:51 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:38.183 10:02:51 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:38.183 10:02:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.183 10:02:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.183 10:02:51 -- common/autotest_common.sh@10 -- # set +x 00:07:38.183 ************************************ 00:07:38.183 START TEST nvme_mount 00:07:38.183 ************************************ 00:07:38.183 
10:02:51 -- common/autotest_common.sh@1104 -- # nvme_mount 00:07:38.183 10:02:51 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:38.183 10:02:51 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:38.183 10:02:51 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:38.183 10:02:51 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:38.183 10:02:51 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:38.183 10:02:51 -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:38.183 10:02:51 -- setup/common.sh@40 -- # local part_no=1 00:07:38.183 10:02:51 -- setup/common.sh@41 -- # local size=1073741824 00:07:38.183 10:02:51 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:38.183 10:02:51 -- setup/common.sh@44 -- # parts=() 00:07:38.183 10:02:51 -- setup/common.sh@44 -- # local parts 00:07:38.183 10:02:51 -- setup/common.sh@46 -- # (( part = 1 )) 00:07:38.183 10:02:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:38.183 10:02:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:38.183 10:02:51 -- setup/common.sh@46 -- # (( part++ )) 00:07:38.183 10:02:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:38.183 10:02:51 -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:38.183 10:02:51 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:38.183 10:02:51 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:39.641 Creating new GPT entries in memory. 00:07:39.641 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:39.642 other utilities. 00:07:39.642 10:02:52 -- setup/common.sh@57 -- # (( part = 1 )) 00:07:39.642 10:02:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:39.642 10:02:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:39.642 10:02:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:39.642 10:02:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:40.580 Creating new GPT entries in memory. 00:07:40.580 The operation has completed successfully. 
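The partition_drive step just traced zaps the old GPT and then, while holding flock on the whole-disk node so nothing races the partitioner, creates one 1 GiB partition spanning sectors 2048-2099199. A hedged sketch of the same sequence; the device path is a placeholder and partprobe stands in for the uevent-sync wrapper the test calls:

    # Destructive: only run against a scratch disk. /dev/nvme0n1 is a placeholder.
    disk=/dev/nvme0n1
    start=2048                                  # first usable sector on a fresh GPT
    sectors=$((1 * 1024 * 1024 * 1024 / 512))   # 1 GiB in 512-byte sectors
    end=$((start + sectors - 1))                # 2099199, as in the trace

    sgdisk "$disk" --zap-all                              # wipe GPT and protective MBR
    flock "$disk" sgdisk "$disk" --new=1:"$start":"$end"  # create partition 1 under the disk lock
    partprobe "$disk"                                     # stand-in for sync_dev_uevents.sh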
00:07:40.580 10:02:53 -- setup/common.sh@57 -- # (( part++ )) 00:07:40.580 10:02:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:40.580 10:02:53 -- setup/common.sh@62 -- # wait 1130999 00:07:40.580 10:02:53 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:40.580 10:02:53 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:07:40.580 10:02:53 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:40.580 10:02:53 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:40.580 10:02:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:40.580 10:02:53 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:40.580 10:02:53 -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:40.580 10:02:53 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:40.580 10:02:53 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:40.580 10:02:53 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:40.580 10:02:53 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:40.580 10:02:53 -- setup/devices.sh@53 -- # local found=0 00:07:40.580 10:02:53 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:40.580 10:02:53 -- setup/devices.sh@56 -- # : 00:07:40.580 10:02:53 -- setup/devices.sh@59 -- # local pci status 00:07:40.580 10:02:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.580 10:02:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:40.580 10:02:53 -- setup/devices.sh@47 -- # setup output config 00:07:40.580 10:02:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:40.580 10:02:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:43.871 10:02:56 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:56 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:43.871 10:02:56 -- setup/devices.sh@63 -- # found=1 00:07:43.871 10:02:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 
]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.871 10:02:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:43.871 10:02:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:46.408 10:02:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:46.408 10:02:59 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:46.408 10:02:59 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:46.408 10:02:59 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:46.408 10:02:59 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:46.408 10:02:59 -- setup/devices.sh@110 -- # cleanup_nvme 00:07:46.408 10:02:59 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:46.408 10:02:59 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:46.408 10:02:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:46.408 10:02:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:46.408 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:46.408 10:02:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:46.408 10:02:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:46.408 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:46.408 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:07:46.408 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe 
(PMBR): 55 aa 00:07:46.408 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:46.408 10:02:59 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:07:46.408 10:02:59 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:07:46.408 10:02:59 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:46.408 10:02:59 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:46.408 10:02:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:46.408 10:02:59 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:46.408 10:02:59 -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:46.408 10:02:59 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:46.408 10:02:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:46.409 10:02:59 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:46.409 10:02:59 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:46.409 10:02:59 -- setup/devices.sh@53 -- # local found=0 00:07:46.409 10:02:59 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:46.409 10:02:59 -- setup/devices.sh@56 -- # : 00:07:46.409 10:02:59 -- setup/devices.sh@59 -- # local pci status 00:07:46.409 10:02:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:46.409 10:02:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:46.409 10:02:59 -- setup/devices.sh@47 -- # setup output config 00:07:46.409 10:02:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:46.409 10:02:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:49.734 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.734 10:03:02 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:49.734 10:03:02 -- setup/devices.sh@63 -- # found=1 00:07:49.734 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.734 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.734 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.735 10:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:49.735 10:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.267 10:03:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:52.267 10:03:04 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:52.267 10:03:04 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:52.267 10:03:04 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:52.267 10:03:04 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:52.267 10:03:04 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:52.267 10:03:04 -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:07:52.267 10:03:04 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:52.267 10:03:04 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:52.267 10:03:04 -- setup/devices.sh@50 -- # local mount_point= 00:07:52.267 10:03:04 -- setup/devices.sh@51 -- # local test_file= 00:07:52.267 10:03:04 -- setup/devices.sh@53 -- # local found=0 00:07:52.267 10:03:04 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:52.268 10:03:04 -- setup/devices.sh@59 -- # local pci status 00:07:52.268 10:03:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.268 10:03:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:52.268 10:03:04 -- setup/devices.sh@47 -- # setup output config 00:07:52.268 10:03:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:52.268 10:03:04 -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:55.554 10:03:08 -- setup/devices.sh@63 -- # found=1 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.554 10:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:55.554 10:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.459 10:03:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:57.459 10:03:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:57.459 10:03:10 -- setup/devices.sh@68 -- # return 0 00:07:57.459 10:03:10 -- setup/devices.sh@128 -- # cleanup_nvme 00:07:57.459 10:03:10 -- setup/devices.sh@20 -- # mountpoint -q 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:57.459 10:03:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:57.459 10:03:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:57.459 10:03:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:57.459 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:57.459 00:07:57.459 real 0m19.232s 00:07:57.459 user 0m5.808s 00:07:57.459 sys 0m10.917s 00:07:57.459 10:03:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.459 10:03:10 -- common/autotest_common.sh@10 -- # set +x 00:07:57.459 ************************************ 00:07:57.459 END TEST nvme_mount 00:07:57.459 ************************************ 00:07:57.459 10:03:10 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:57.459 10:03:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.459 10:03:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.459 10:03:10 -- common/autotest_common.sh@10 -- # set +x 00:07:57.459 ************************************ 00:07:57.459 START TEST dm_mount 00:07:57.459 ************************************ 00:07:57.459 10:03:10 -- common/autotest_common.sh@1104 -- # dm_mount 00:07:57.459 10:03:10 -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:57.459 10:03:10 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:57.459 10:03:10 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:57.459 10:03:10 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:57.459 10:03:10 -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:57.459 10:03:10 -- setup/common.sh@40 -- # local part_no=2 00:07:57.459 10:03:10 -- setup/common.sh@41 -- # local size=1073741824 00:07:57.460 10:03:10 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:57.460 10:03:10 -- setup/common.sh@44 -- # parts=() 00:07:57.460 10:03:10 -- setup/common.sh@44 -- # local parts 00:07:57.460 10:03:10 -- setup/common.sh@46 -- # (( part = 1 )) 00:07:57.460 10:03:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:57.460 10:03:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:57.460 10:03:10 -- setup/common.sh@46 -- # (( part++ )) 00:07:57.460 10:03:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:57.460 10:03:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:57.460 10:03:10 -- setup/common.sh@46 -- # (( part++ )) 00:07:57.460 10:03:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:57.460 10:03:10 -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:57.460 10:03:10 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:57.460 10:03:10 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:58.841 Creating new GPT entries in memory. 00:07:58.841 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:58.841 other utilities. 00:07:58.841 10:03:11 -- setup/common.sh@57 -- # (( part = 1 )) 00:07:58.841 10:03:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:58.841 10:03:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:58.841 10:03:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:58.841 10:03:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:59.778 Creating new GPT entries in memory. 00:07:59.778 The operation has completed successfully. 
00:07:59.778 10:03:12 -- setup/common.sh@57 -- # (( part++ )) 00:07:59.778 10:03:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:59.778 10:03:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:59.778 10:03:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:59.778 10:03:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:08:00.718 The operation has completed successfully. 00:08:00.718 10:03:13 -- setup/common.sh@57 -- # (( part++ )) 00:08:00.718 10:03:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:00.718 10:03:13 -- setup/common.sh@62 -- # wait 1136525 00:08:00.718 10:03:13 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:00.718 10:03:13 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:08:00.718 10:03:13 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:00.718 10:03:13 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:00.718 10:03:13 -- setup/devices.sh@160 -- # for t in {1..5} 00:08:00.718 10:03:13 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:00.718 10:03:13 -- setup/devices.sh@161 -- # break 00:08:00.718 10:03:13 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:00.718 10:03:13 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:00.718 10:03:13 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:00.718 10:03:13 -- setup/devices.sh@166 -- # dm=dm-0 00:08:00.718 10:03:13 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:00.718 10:03:13 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:00.718 10:03:13 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:08:00.718 10:03:13 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:08:00.718 10:03:13 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:08:00.718 10:03:13 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:00.718 10:03:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:00.718 10:03:13 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:08:00.718 10:03:13 -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:00.718 10:03:13 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:08:00.718 10:03:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:00.718 10:03:13 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:08:00.718 10:03:13 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:00.718 10:03:13 -- setup/devices.sh@53 -- # local found=0 00:08:00.718 10:03:13 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:08:00.718 10:03:13 -- setup/devices.sh@56 -- # : 
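With both partitions created, the dm_mount test traced above stitches them into a single device-mapper node and then formats and mounts /dev/mapper/nvme_dm_test like any other block device. The trace does not show the mapping table itself, so the linear concatenation below is only a plausible sketch; the partition paths and mount point are placeholders:

    # Sketch: concatenate two partitions into one dm device, then format and mount it.
    p1=/dev/nvme0n1p1
    p2=/dev/nvme0n1p2
    sz1=$(blockdev --getsz "$p1")    # segment lengths in 512-byte sectors
    sz2=$(blockdev --getsz "$p2")

    # dmsetup reads the table from stdin: <start> <length> linear <device> <offset>
    printf '0 %s linear %s 0\n%s %s linear %s 0\n' "$sz1" "$p1" "$sz1" "$sz2" "$p2" \
        | dmsetup create nvme_dm_test

    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mkdir -p /mnt/dm_test && mount /dev/mapper/nvme_dm_test /mnt/dm_test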
00:08:00.718 10:03:13 -- setup/devices.sh@59 -- # local pci status 00:08:00.718 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:00.718 10:03:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:08:00.718 10:03:13 -- setup/devices.sh@47 -- # setup output config 00:08:00.718 10:03:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:00.718 10:03:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:04.911 10:03:17 -- setup/devices.sh@63 -- # found=1 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.911 10:03:17 -- setup/devices.sh@62 -- # 
[[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:04.911 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.289 10:03:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:06.289 10:03:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:08:06.289 10:03:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:08:06.289 10:03:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:08:06.289 10:03:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:06.289 10:03:19 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:08:06.289 10:03:19 -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:08:06.289 10:03:19 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:08:06.289 10:03:19 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:08:06.289 10:03:19 -- setup/devices.sh@50 -- # local mount_point= 00:08:06.289 10:03:19 -- setup/devices.sh@51 -- # local test_file= 00:08:06.289 10:03:19 -- setup/devices.sh@53 -- # local found=0 00:08:06.289 10:03:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:06.289 10:03:19 -- setup/devices.sh@59 -- # local pci status 00:08:06.289 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.289 10:03:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:08:06.289 10:03:19 -- setup/devices.sh@47 -- # setup output config 00:08:06.289 10:03:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:06.289 10:03:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:08:09.579 10:03:22 -- setup/devices.sh@63 -- # found=1 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:09.579 10:03:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:09.579 10:03:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.485 10:03:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:11.485 10:03:24 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:11.485 10:03:24 -- setup/devices.sh@68 -- # return 0 00:08:11.485 10:03:24 -- setup/devices.sh@187 -- # cleanup_dm 00:08:11.485 10:03:24 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:08:11.485 10:03:24 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:11.485 10:03:24 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:11.485 10:03:24 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:11.485 10:03:24 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:11.485 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:11.485 10:03:24 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:11.486 10:03:24 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:11.486 00:08:11.486 real 0m13.857s 00:08:11.486 user 0m3.504s 00:08:11.486 sys 0m7.258s 00:08:11.486 10:03:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.486 10:03:24 -- common/autotest_common.sh@10 -- # set +x 00:08:11.486 ************************************ 00:08:11.486 END TEST dm_mount 00:08:11.486 ************************************ 00:08:11.486 10:03:24 -- setup/devices.sh@1 -- # cleanup 00:08:11.486 10:03:24 -- setup/devices.sh@11 -- # cleanup_nvme 00:08:11.486 10:03:24 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:08:11.486 10:03:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:11.486 10:03:24 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:11.486 10:03:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:11.486 10:03:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:11.745 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:08:11.745 /dev/nvme0n1: 8 
bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:08:11.745 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:11.745 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:11.745 10:03:24 -- setup/devices.sh@12 -- # cleanup_dm 00:08:11.745 10:03:24 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:08:11.745 10:03:24 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:11.745 10:03:24 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:11.745 10:03:24 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:11.745 10:03:24 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:11.745 10:03:24 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:11.745 00:08:11.745 real 0m39.035s 00:08:11.745 user 0m10.954s 00:08:11.745 sys 0m22.166s 00:08:11.745 10:03:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.745 10:03:24 -- common/autotest_common.sh@10 -- # set +x 00:08:11.745 ************************************ 00:08:11.745 END TEST devices 00:08:11.745 ************************************ 00:08:11.745 00:08:11.745 real 2m17.111s 00:08:11.745 user 0m39.956s 00:08:11.745 sys 1m19.348s 00:08:11.745 10:03:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.745 10:03:24 -- common/autotest_common.sh@10 -- # set +x 00:08:11.745 ************************************ 00:08:11.745 END TEST setup.sh 00:08:11.745 ************************************ 00:08:11.745 10:03:25 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:08:15.112 Hugepages 00:08:15.112 node hugesize free / total 00:08:15.112 node0 1048576kB 0 / 0 00:08:15.112 node0 2048kB 2048 / 2048 00:08:15.112 node1 1048576kB 0 / 0 00:08:15.112 node1 2048kB 0 / 0 00:08:15.112 00:08:15.112 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:15.112 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:08:15.112 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:08:15.112 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:08:15.112 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:08:15.112 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:08:15.112 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:08:15.112 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:08:15.112 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:08:15.112 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:08:15.112 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:08:15.112 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:08:15.112 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:08:15.112 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:08:15.112 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:08:15.112 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:08:15.112 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:08:15.112 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:08:15.112 10:03:28 -- spdk/autotest.sh@141 -- # uname -s 00:08:15.112 10:03:28 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:08:15.112 10:03:28 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:08:15.112 10:03:28 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:08:18.410 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 
00:08:18.669 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:18.669 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:21.958 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:08:23.864 10:03:36 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:24.803 10:03:37 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:24.803 10:03:37 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:24.803 10:03:37 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:08:24.803 10:03:37 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:08:24.803 10:03:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:24.803 10:03:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:24.803 10:03:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:24.803 10:03:37 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:24.803 10:03:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:24.803 10:03:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:24.803 10:03:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:1a:00.0 00:08:24.803 10:03:38 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:08:28.092 Waiting for block devices as requested 00:08:28.092 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:08:28.351 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:28.351 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:28.351 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:28.611 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:28.611 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:28.611 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:28.870 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:28.870 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:28.870 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:28.870 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:29.129 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:29.129 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:29.129 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:29.387 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:29.387 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:29.387 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:31.287 10:03:44 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:08:31.546 10:03:44 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:08:31.546 10:03:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:08:31.546 10:03:44 -- common/autotest_common.sh@1487 -- # grep 0000:1a:00.0/nvme/nvme 00:08:31.546 10:03:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:08:31.546 10:03:44 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:08:31.546 10:03:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:08:31.546 10:03:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:31.546 10:03:44 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:08:31.546 10:03:44 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:08:31.546 10:03:44 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:08:31.546 10:03:44 -- common/autotest_common.sh@1530 -- # grep oacs 00:08:31.546 10:03:44 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:08:31.546 10:03:44 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:08:31.546 10:03:44 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:08:31.546 10:03:44 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:08:31.546 10:03:44 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:08:31.546 10:03:44 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:08:31.546 10:03:44 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:08:31.546 10:03:44 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:08:31.546 10:03:44 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:08:31.547 10:03:44 -- common/autotest_common.sh@1542 -- # continue 00:08:31.547 10:03:44 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:08:31.547 10:03:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:31.547 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:31.547 10:03:44 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:08:31.547 10:03:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:31.547 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:31.547 10:03:44 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:08:34.833 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:34.833 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:38.122 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:08:40.023 10:03:52 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:08:40.023 10:03:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:40.023 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:08:40.023 10:03:52 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:08:40.023 10:03:52 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:40.023 10:03:52 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:40.023 10:03:52 -- common/autotest_common.sh@1562 -- # bdfs=() 00:08:40.023 10:03:52 -- common/autotest_common.sh@1562 -- # local bdfs 00:08:40.023 
10:03:52 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:40.023 10:03:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:40.023 10:03:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:40.023 10:03:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:40.023 10:03:52 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:40.023 10:03:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:40.023 10:03:52 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:40.023 10:03:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:1a:00.0 00:08:40.023 10:03:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:08:40.023 10:03:52 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:08:40.023 10:03:52 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:08:40.023 10:03:52 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:40.023 10:03:52 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:08:40.023 10:03:52 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:1a:00.0 00:08:40.023 10:03:52 -- common/autotest_common.sh@1577 -- # [[ -z 0000:1a:00.0 ]] 00:08:40.023 10:03:52 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1146801 00:08:40.023 10:03:52 -- common/autotest_common.sh@1583 -- # waitforlisten 1146801 00:08:40.023 10:03:52 -- common/autotest_common.sh@819 -- # '[' -z 1146801 ']' 00:08:40.023 10:03:52 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:40.023 10:03:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.023 10:03:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:40.023 10:03:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.023 10:03:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:40.023 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:08:40.023 [2024-04-24 10:03:52.997012] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:08:40.023 [2024-04-24 10:03:52.997112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146801 ] 00:08:40.023 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.023 [2024-04-24 10:03:53.068730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.023 [2024-04-24 10:03:53.159605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:40.023 [2024-04-24 10:03:53.159724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.589 10:03:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:40.589 10:03:53 -- common/autotest_common.sh@852 -- # return 0 00:08:40.589 10:03:53 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:08:40.589 10:03:53 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:08:40.589 10:03:53 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:08:43.876 nvme0n1 00:08:43.876 10:03:56 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:43.876 [2024-04-24 10:03:56.980015] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:08:43.876 request: 00:08:43.876 { 00:08:43.876 "nvme_ctrlr_name": "nvme0", 00:08:43.876 "password": "test", 00:08:43.876 "method": "bdev_nvme_opal_revert", 00:08:43.876 "req_id": 1 00:08:43.876 } 00:08:43.876 Got JSON-RPC error response 00:08:43.876 response: 00:08:43.876 { 00:08:43.876 "code": -32602, 00:08:43.876 "message": "Invalid parameters" 00:08:43.876 } 00:08:43.876 10:03:56 -- common/autotest_common.sh@1589 -- # true 00:08:43.876 10:03:56 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:08:43.876 10:03:56 -- common/autotest_common.sh@1593 -- # killprocess 1146801 00:08:43.876 10:03:56 -- common/autotest_common.sh@926 -- # '[' -z 1146801 ']' 00:08:43.876 10:03:56 -- common/autotest_common.sh@930 -- # kill -0 1146801 00:08:43.876 10:03:56 -- common/autotest_common.sh@931 -- # uname 00:08:43.876 10:03:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:43.876 10:03:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1146801 00:08:43.876 10:03:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:43.876 10:03:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:43.877 10:03:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1146801' 00:08:43.877 killing process with pid 1146801 00:08:43.877 10:03:57 -- common/autotest_common.sh@945 -- # kill 1146801 00:08:43.877 10:03:57 -- common/autotest_common.sh@950 -- # wait 1146801 00:08:48.077 10:04:00 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:08:48.077 10:04:00 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:08:48.077 10:04:00 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:48.077 10:04:00 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:48.077 10:04:00 -- spdk/autotest.sh@173 -- # timing_enter lib 00:08:48.077 10:04:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:48.077 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:08:48.077 10:04:00 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:08:48.077 
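For reference, a minimal manual reproduction of the opal_revert_cleanup step above (a sketch only, assuming the same workspace layout and a running spdk_tgt such as pid 1146801): the drive at 0000:1a:00.0 does not support Opal, so the second RPC is expected to come back with the -32602 "Invalid parameters" response captured in the log, and the test treats that as acceptable.
  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # attach the controller as bdev "nvme0" over PCIe (same arguments as the run above)
  sudo ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0
  # request an Opal revert; on this non-Opal drive the call fails as shown in the log
  sudo ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test || echo 'revert rejected (-32602), as in the log'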
10:04:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:48.077 10:04:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.077 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:08:48.077 ************************************ 00:08:48.077 START TEST env 00:08:48.077 ************************************ 00:08:48.077 10:04:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:08:48.077 * Looking for test storage... 00:08:48.077 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:08:48.077 10:04:01 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:08:48.077 10:04:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:48.077 10:04:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.077 10:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.077 ************************************ 00:08:48.077 START TEST env_memory 00:08:48.077 ************************************ 00:08:48.077 10:04:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:08:48.077 00:08:48.077 00:08:48.077 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.077 http://cunit.sourceforge.net/ 00:08:48.077 00:08:48.077 00:08:48.077 Suite: memory 00:08:48.077 Test: alloc and free memory map ...[2024-04-24 10:04:01.109161] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:48.077 passed 00:08:48.077 Test: mem map translation ...[2024-04-24 10:04:01.123267] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:48.077 [2024-04-24 10:04:01.123284] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:48.077 [2024-04-24 10:04:01.123316] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:48.077 [2024-04-24 10:04:01.123325] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:48.077 passed 00:08:48.077 Test: mem map registration ...[2024-04-24 10:04:01.145347] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:48.077 [2024-04-24 10:04:01.145367] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:48.077 passed 00:08:48.077 Test: mem map adjacent registrations ...passed 00:08:48.077 00:08:48.077 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.077 suites 1 1 n/a 0 0 00:08:48.077 tests 4 4 4 0 0 00:08:48.077 asserts 152 152 152 0 n/a 00:08:48.077 00:08:48.077 Elapsed time = 0.090 seconds 00:08:48.077 00:08:48.077 real 0m0.103s 00:08:48.077 user 0m0.091s 00:08:48.077 sys 0m0.012s 00:08:48.077 10:04:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.077 10:04:01 -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.077 ************************************ 00:08:48.077 END TEST env_memory 00:08:48.077 ************************************ 00:08:48.077 10:04:01 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:48.077 10:04:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:48.077 10:04:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.077 10:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.077 ************************************ 00:08:48.077 START TEST env_vtophys 00:08:48.077 ************************************ 00:08:48.077 10:04:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:48.077 EAL: lib.eal log level changed from notice to debug 00:08:48.077 EAL: Detected lcore 0 as core 0 on socket 0 00:08:48.077 EAL: Detected lcore 1 as core 1 on socket 0 00:08:48.077 EAL: Detected lcore 2 as core 2 on socket 0 00:08:48.077 EAL: Detected lcore 3 as core 3 on socket 0 00:08:48.077 EAL: Detected lcore 4 as core 4 on socket 0 00:08:48.077 EAL: Detected lcore 5 as core 8 on socket 0 00:08:48.077 EAL: Detected lcore 6 as core 9 on socket 0 00:08:48.077 EAL: Detected lcore 7 as core 10 on socket 0 00:08:48.077 EAL: Detected lcore 8 as core 11 on socket 0 00:08:48.077 EAL: Detected lcore 9 as core 16 on socket 0 00:08:48.077 EAL: Detected lcore 10 as core 17 on socket 0 00:08:48.077 EAL: Detected lcore 11 as core 18 on socket 0 00:08:48.077 EAL: Detected lcore 12 as core 19 on socket 0 00:08:48.077 EAL: Detected lcore 13 as core 20 on socket 0 00:08:48.077 EAL: Detected lcore 14 as core 24 on socket 0 00:08:48.077 EAL: Detected lcore 15 as core 25 on socket 0 00:08:48.077 EAL: Detected lcore 16 as core 26 on socket 0 00:08:48.077 EAL: Detected lcore 17 as core 27 on socket 0 00:08:48.077 EAL: Detected lcore 18 as core 0 on socket 1 00:08:48.077 EAL: Detected lcore 19 as core 1 on socket 1 00:08:48.077 EAL: Detected lcore 20 as core 2 on socket 1 00:08:48.077 EAL: Detected lcore 21 as core 3 on socket 1 00:08:48.077 EAL: Detected lcore 22 as core 4 on socket 1 00:08:48.077 EAL: Detected lcore 23 as core 8 on socket 1 00:08:48.077 EAL: Detected lcore 24 as core 9 on socket 1 00:08:48.077 EAL: Detected lcore 25 as core 10 on socket 1 00:08:48.077 EAL: Detected lcore 26 as core 11 on socket 1 00:08:48.077 EAL: Detected lcore 27 as core 16 on socket 1 00:08:48.077 EAL: Detected lcore 28 as core 17 on socket 1 00:08:48.077 EAL: Detected lcore 29 as core 18 on socket 1 00:08:48.077 EAL: Detected lcore 30 as core 19 on socket 1 00:08:48.077 EAL: Detected lcore 31 as core 20 on socket 1 00:08:48.077 EAL: Detected lcore 32 as core 24 on socket 1 00:08:48.077 EAL: Detected lcore 33 as core 25 on socket 1 00:08:48.077 EAL: Detected lcore 34 as core 26 on socket 1 00:08:48.077 EAL: Detected lcore 35 as core 27 on socket 1 00:08:48.077 EAL: Detected lcore 36 as core 0 on socket 0 00:08:48.077 EAL: Detected lcore 37 as core 1 on socket 0 00:08:48.077 EAL: Detected lcore 38 as core 2 on socket 0 00:08:48.077 EAL: Detected lcore 39 as core 3 on socket 0 00:08:48.077 EAL: Detected lcore 40 as core 4 on socket 0 00:08:48.077 EAL: Detected lcore 41 as core 8 on socket 0 00:08:48.077 EAL: Detected lcore 42 as core 9 on socket 0 00:08:48.077 EAL: Detected lcore 43 as core 10 on socket 0 00:08:48.077 EAL: Detected lcore 44 as core 11 on socket 0 00:08:48.077 EAL: Detected lcore 45 as core 16 on socket 
0 00:08:48.077 EAL: Detected lcore 46 as core 17 on socket 0 00:08:48.077 EAL: Detected lcore 47 as core 18 on socket 0 00:08:48.077 EAL: Detected lcore 48 as core 19 on socket 0 00:08:48.077 EAL: Detected lcore 49 as core 20 on socket 0 00:08:48.077 EAL: Detected lcore 50 as core 24 on socket 0 00:08:48.077 EAL: Detected lcore 51 as core 25 on socket 0 00:08:48.077 EAL: Detected lcore 52 as core 26 on socket 0 00:08:48.077 EAL: Detected lcore 53 as core 27 on socket 0 00:08:48.077 EAL: Detected lcore 54 as core 0 on socket 1 00:08:48.077 EAL: Detected lcore 55 as core 1 on socket 1 00:08:48.077 EAL: Detected lcore 56 as core 2 on socket 1 00:08:48.077 EAL: Detected lcore 57 as core 3 on socket 1 00:08:48.077 EAL: Detected lcore 58 as core 4 on socket 1 00:08:48.077 EAL: Detected lcore 59 as core 8 on socket 1 00:08:48.077 EAL: Detected lcore 60 as core 9 on socket 1 00:08:48.077 EAL: Detected lcore 61 as core 10 on socket 1 00:08:48.077 EAL: Detected lcore 62 as core 11 on socket 1 00:08:48.077 EAL: Detected lcore 63 as core 16 on socket 1 00:08:48.077 EAL: Detected lcore 64 as core 17 on socket 1 00:08:48.077 EAL: Detected lcore 65 as core 18 on socket 1 00:08:48.077 EAL: Detected lcore 66 as core 19 on socket 1 00:08:48.077 EAL: Detected lcore 67 as core 20 on socket 1 00:08:48.077 EAL: Detected lcore 68 as core 24 on socket 1 00:08:48.077 EAL: Detected lcore 69 as core 25 on socket 1 00:08:48.077 EAL: Detected lcore 70 as core 26 on socket 1 00:08:48.077 EAL: Detected lcore 71 as core 27 on socket 1 00:08:48.077 EAL: Maximum logical cores by configuration: 128 00:08:48.077 EAL: Detected CPU lcores: 72 00:08:48.077 EAL: Detected NUMA nodes: 2 00:08:48.077 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:48.077 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:48.077 EAL: Checking presence of .so 'librte_eal.so' 00:08:48.077 EAL: Detected static linkage of DPDK 00:08:48.077 EAL: No shared files mode enabled, IPC will be disabled 00:08:48.077 EAL: Bus pci wants IOVA as 'DC' 00:08:48.077 EAL: Buses did not request a specific IOVA mode. 00:08:48.077 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:48.077 EAL: Selected IOVA mode 'VA' 00:08:48.077 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.077 EAL: Probing VFIO support... 00:08:48.077 EAL: IOMMU type 1 (Type 1) is supported 00:08:48.077 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:48.077 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:48.077 EAL: VFIO support initialized 00:08:48.077 EAL: Ask a virtual area of 0x2e000 bytes 00:08:48.077 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:48.077 EAL: Setting up physically contiguous memory... 
00:08:48.077 EAL: Setting maximum number of open files to 524288 00:08:48.078 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:48.078 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:48.078 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:48.078 EAL: Ask a virtual area of 0x61000 bytes 00:08:48.078 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:48.078 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:48.078 EAL: Ask a virtual area of 0x400000000 bytes 00:08:48.078 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:48.078 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:48.078 EAL: Ask a virtual area of 0x61000 bytes 00:08:48.078 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:48.078 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:48.078 EAL: Ask a virtual area of 0x400000000 bytes 00:08:48.078 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:48.078 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:48.078 EAL: Ask a virtual area of 0x61000 bytes 00:08:48.078 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:48.078 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:48.078 EAL: Ask a virtual area of 0x400000000 bytes 00:08:48.078 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:48.078 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:48.078 EAL: Ask a virtual area of 0x61000 bytes 00:08:48.078 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:48.078 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:48.078 EAL: Ask a virtual area of 0x400000000 bytes 00:08:48.078 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:48.078 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:48.078 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:48.078 EAL: Ask a virtual area of 0x61000 bytes 00:08:48.078 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:48.078 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:48.078 EAL: Ask a virtual area of 0x400000000 bytes 00:08:48.078 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:48.078 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:48.078 EAL: Ask a virtual area of 0x61000 bytes 00:08:48.078 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:48.078 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:48.078 EAL: Ask a virtual area of 0x400000000 bytes 00:08:48.078 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:48.078 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:48.078 EAL: Ask a virtual area of 0x61000 bytes 00:08:48.078 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:48.078 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:48.078 EAL: Ask a virtual area of 0x400000000 bytes 00:08:48.078 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:48.078 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:48.078 EAL: Ask a virtual area of 0x61000 bytes 00:08:48.078 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:48.078 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:48.078 EAL: Ask a virtual area of 0x400000000 bytes 00:08:48.078 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:08:48.078 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:48.078 EAL: Hugepages will be freed exactly as allocated. 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: TSC frequency is ~2300000 KHz 00:08:48.078 EAL: Main lcore 0 is ready (tid=7fb8485fea00;cpuset=[0]) 00:08:48.078 EAL: Trying to obtain current memory policy. 00:08:48.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.078 EAL: Restoring previous memory policy: 0 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was expanded by 2MB 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Mem event callback 'spdk:(nil)' registered 00:08:48.078 00:08:48.078 00:08:48.078 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.078 http://cunit.sourceforge.net/ 00:08:48.078 00:08:48.078 00:08:48.078 Suite: components_suite 00:08:48.078 Test: vtophys_malloc_test ...passed 00:08:48.078 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:48.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.078 EAL: Restoring previous memory policy: 4 00:08:48.078 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was expanded by 4MB 00:08:48.078 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was shrunk by 4MB 00:08:48.078 EAL: Trying to obtain current memory policy. 00:08:48.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.078 EAL: Restoring previous memory policy: 4 00:08:48.078 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was expanded by 6MB 00:08:48.078 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was shrunk by 6MB 00:08:48.078 EAL: Trying to obtain current memory policy. 00:08:48.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.078 EAL: Restoring previous memory policy: 4 00:08:48.078 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was expanded by 10MB 00:08:48.078 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was shrunk by 10MB 00:08:48.078 EAL: Trying to obtain current memory policy. 
00:08:48.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.078 EAL: Restoring previous memory policy: 4 00:08:48.078 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was expanded by 18MB 00:08:48.078 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was shrunk by 18MB 00:08:48.078 EAL: Trying to obtain current memory policy. 00:08:48.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.078 EAL: Restoring previous memory policy: 4 00:08:48.078 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was expanded by 34MB 00:08:48.078 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.078 EAL: request: mp_malloc_sync 00:08:48.078 EAL: No shared files mode enabled, IPC is disabled 00:08:48.078 EAL: Heap on socket 0 was shrunk by 34MB 00:08:48.078 EAL: Trying to obtain current memory policy. 00:08:48.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.391 EAL: Restoring previous memory policy: 4 00:08:48.391 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.391 EAL: request: mp_malloc_sync 00:08:48.391 EAL: No shared files mode enabled, IPC is disabled 00:08:48.391 EAL: Heap on socket 0 was expanded by 66MB 00:08:48.391 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.391 EAL: request: mp_malloc_sync 00:08:48.391 EAL: No shared files mode enabled, IPC is disabled 00:08:48.391 EAL: Heap on socket 0 was shrunk by 66MB 00:08:48.391 EAL: Trying to obtain current memory policy. 00:08:48.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.391 EAL: Restoring previous memory policy: 4 00:08:48.391 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.391 EAL: request: mp_malloc_sync 00:08:48.391 EAL: No shared files mode enabled, IPC is disabled 00:08:48.391 EAL: Heap on socket 0 was expanded by 130MB 00:08:48.391 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.391 EAL: request: mp_malloc_sync 00:08:48.391 EAL: No shared files mode enabled, IPC is disabled 00:08:48.391 EAL: Heap on socket 0 was shrunk by 130MB 00:08:48.391 EAL: Trying to obtain current memory policy. 00:08:48.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.391 EAL: Restoring previous memory policy: 4 00:08:48.391 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.391 EAL: request: mp_malloc_sync 00:08:48.391 EAL: No shared files mode enabled, IPC is disabled 00:08:48.391 EAL: Heap on socket 0 was expanded by 258MB 00:08:48.391 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.391 EAL: request: mp_malloc_sync 00:08:48.391 EAL: No shared files mode enabled, IPC is disabled 00:08:48.391 EAL: Heap on socket 0 was shrunk by 258MB 00:08:48.391 EAL: Trying to obtain current memory policy. 
00:08:48.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.650 EAL: Restoring previous memory policy: 4 00:08:48.650 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.650 EAL: request: mp_malloc_sync 00:08:48.650 EAL: No shared files mode enabled, IPC is disabled 00:08:48.650 EAL: Heap on socket 0 was expanded by 514MB 00:08:48.650 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.650 EAL: request: mp_malloc_sync 00:08:48.650 EAL: No shared files mode enabled, IPC is disabled 00:08:48.650 EAL: Heap on socket 0 was shrunk by 514MB 00:08:48.650 EAL: Trying to obtain current memory policy. 00:08:48.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.909 EAL: Restoring previous memory policy: 4 00:08:48.909 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.909 EAL: request: mp_malloc_sync 00:08:48.909 EAL: No shared files mode enabled, IPC is disabled 00:08:48.909 EAL: Heap on socket 0 was expanded by 1026MB 00:08:49.168 EAL: Calling mem event callback 'spdk:(nil)' 00:08:49.427 EAL: request: mp_malloc_sync 00:08:49.427 EAL: No shared files mode enabled, IPC is disabled 00:08:49.427 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:49.427 passed 00:08:49.427 00:08:49.427 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.427 suites 1 1 n/a 0 0 00:08:49.427 tests 2 2 2 0 0 00:08:49.427 asserts 497 497 497 0 n/a 00:08:49.427 00:08:49.427 Elapsed time = 1.104 seconds 00:08:49.427 EAL: Calling mem event callback 'spdk:(nil)' 00:08:49.427 EAL: request: mp_malloc_sync 00:08:49.427 EAL: No shared files mode enabled, IPC is disabled 00:08:49.427 EAL: Heap on socket 0 was shrunk by 2MB 00:08:49.427 EAL: No shared files mode enabled, IPC is disabled 00:08:49.427 EAL: No shared files mode enabled, IPC is disabled 00:08:49.427 EAL: No shared files mode enabled, IPC is disabled 00:08:49.427 00:08:49.427 real 0m1.233s 00:08:49.427 user 0m0.713s 00:08:49.427 sys 0m0.491s 00:08:49.427 10:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.427 10:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:49.427 ************************************ 00:08:49.427 END TEST env_vtophys 00:08:49.427 ************************************ 00:08:49.427 10:04:02 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:08:49.427 10:04:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:49.427 10:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.427 10:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:49.427 ************************************ 00:08:49.427 START TEST env_pci 00:08:49.427 ************************************ 00:08:49.427 10:04:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:08:49.427 00:08:49.427 00:08:49.427 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.427 http://cunit.sourceforge.net/ 00:08:49.427 00:08:49.427 00:08:49.427 Suite: pci 00:08:49.427 Test: pci_hook ...[2024-04-24 10:04:02.517396] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1148136 has claimed it 00:08:49.427 EAL: Cannot find device (10000:00:01.0) 00:08:49.427 EAL: Failed to attach device on primary process 00:08:49.427 passed 00:08:49.427 00:08:49.427 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.427 suites 1 1 n/a 0 0 00:08:49.427 tests 1 1 1 0 0 
00:08:49.427 asserts 25 25 25 0 n/a 00:08:49.427 00:08:49.427 Elapsed time = 0.038 seconds 00:08:49.427 00:08:49.427 real 0m0.057s 00:08:49.427 user 0m0.013s 00:08:49.427 sys 0m0.044s 00:08:49.427 10:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.427 10:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:49.427 ************************************ 00:08:49.427 END TEST env_pci 00:08:49.427 ************************************ 00:08:49.427 10:04:02 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:49.427 10:04:02 -- env/env.sh@15 -- # uname 00:08:49.427 10:04:02 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:49.427 10:04:02 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:49.427 10:04:02 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:49.427 10:04:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:49.427 10:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.427 10:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:49.427 ************************************ 00:08:49.427 START TEST env_dpdk_post_init 00:08:49.427 ************************************ 00:08:49.427 10:04:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:49.427 EAL: Detected CPU lcores: 72 00:08:49.427 EAL: Detected NUMA nodes: 2 00:08:49.427 EAL: Detected static linkage of DPDK 00:08:49.427 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:49.427 EAL: Selected IOVA mode 'VA' 00:08:49.427 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.427 EAL: VFIO support initialized 00:08:49.427 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:49.686 EAL: Using IOMMU type 1 (Type 1) 00:08:50.253 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:08:55.529 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:08:55.529 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:08:56.097 Starting DPDK initialization... 00:08:56.097 Starting SPDK post initialization... 00:08:56.097 SPDK NVMe probe 00:08:56.097 Attaching to 0000:1a:00.0 00:08:56.097 Attached to 0000:1a:00.0 00:08:56.097 Cleaning up... 
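For reference, the DPDK post-initialization probe above can be rerun by hand with the same binary and EAL flags (a sketch assuming this run's workspace; setup.sh must first have bound 0000:1a:00.0 to vfio-pci, as earlier in the log):
  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  sudo ./scripts/setup.sh   # rebind the NVMe and I/OAT devices to vfio-pci
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000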
00:08:56.097 00:08:56.097 real 0m6.494s 00:08:56.097 user 0m4.946s 00:08:56.097 sys 0m0.798s 00:08:56.097 10:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.097 10:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:56.097 ************************************ 00:08:56.097 END TEST env_dpdk_post_init 00:08:56.097 ************************************ 00:08:56.097 10:04:09 -- env/env.sh@26 -- # uname 00:08:56.097 10:04:09 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:56.097 10:04:09 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:56.097 10:04:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:56.097 10:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:56.097 10:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:56.097 ************************************ 00:08:56.097 START TEST env_mem_callbacks 00:08:56.097 ************************************ 00:08:56.097 10:04:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:56.097 EAL: Detected CPU lcores: 72 00:08:56.097 EAL: Detected NUMA nodes: 2 00:08:56.097 EAL: Detected static linkage of DPDK 00:08:56.097 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:56.097 EAL: Selected IOVA mode 'VA' 00:08:56.097 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.097 EAL: VFIO support initialized 00:08:56.097 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:56.097 00:08:56.097 00:08:56.097 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.097 http://cunit.sourceforge.net/ 00:08:56.097 00:08:56.097 00:08:56.097 Suite: memory 00:08:56.097 Test: test ... 
00:08:56.097 register 0x200000200000 2097152 00:08:56.097 malloc 3145728 00:08:56.097 register 0x200000400000 4194304 00:08:56.097 buf 0x200000500000 len 3145728 PASSED 00:08:56.097 malloc 64 00:08:56.097 buf 0x2000004fff40 len 64 PASSED 00:08:56.097 malloc 4194304 00:08:56.097 register 0x200000800000 6291456 00:08:56.097 buf 0x200000a00000 len 4194304 PASSED 00:08:56.097 free 0x200000500000 3145728 00:08:56.097 free 0x2000004fff40 64 00:08:56.097 unregister 0x200000400000 4194304 PASSED 00:08:56.097 free 0x200000a00000 4194304 00:08:56.097 unregister 0x200000800000 6291456 PASSED 00:08:56.098 malloc 8388608 00:08:56.098 register 0x200000400000 10485760 00:08:56.098 buf 0x200000600000 len 8388608 PASSED 00:08:56.098 free 0x200000600000 8388608 00:08:56.098 unregister 0x200000400000 10485760 PASSED 00:08:56.098 passed 00:08:56.098 00:08:56.098 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.098 suites 1 1 n/a 0 0 00:08:56.098 tests 1 1 1 0 0 00:08:56.098 asserts 15 15 15 0 n/a 00:08:56.098 00:08:56.098 Elapsed time = 0.006 seconds 00:08:56.098 00:08:56.098 real 0m0.072s 00:08:56.098 user 0m0.016s 00:08:56.098 sys 0m0.055s 00:08:56.098 10:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.098 10:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:56.098 ************************************ 00:08:56.098 END TEST env_mem_callbacks 00:08:56.098 ************************************ 00:08:56.098 00:08:56.098 real 0m8.308s 00:08:56.098 user 0m5.908s 00:08:56.098 sys 0m1.666s 00:08:56.098 10:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.098 10:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:56.098 ************************************ 00:08:56.098 END TEST env 00:08:56.098 ************************************ 00:08:56.098 10:04:09 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:08:56.098 10:04:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:56.098 10:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:56.098 10:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:56.098 ************************************ 00:08:56.098 START TEST rpc 00:08:56.098 ************************************ 00:08:56.098 10:04:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:08:56.357 * Looking for test storage... 00:08:56.357 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:56.357 10:04:09 -- rpc/rpc.sh@65 -- # spdk_pid=1149141 00:08:56.357 10:04:09 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:56.357 10:04:09 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:56.357 10:04:09 -- rpc/rpc.sh@67 -- # waitforlisten 1149141 00:08:56.357 10:04:09 -- common/autotest_common.sh@819 -- # '[' -z 1149141 ']' 00:08:56.357 10:04:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.357 10:04:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:56.357 10:04:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:56.357 10:04:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:56.357 10:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:56.357 [2024-04-24 10:04:09.436720] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:56.357 [2024-04-24 10:04:09.436818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149141 ] 00:08:56.357 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.357 [2024-04-24 10:04:09.513198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.357 [2024-04-24 10:04:09.601027] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:56.357 [2024-04-24 10:04:09.601156] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:56.357 [2024-04-24 10:04:09.601167] app.c: 492:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1149141' to capture a snapshot of events at runtime. 00:08:56.357 [2024-04-24 10:04:09.601177] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1149141 for offline analysis/debug. 00:08:56.357 [2024-04-24 10:04:09.601200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.291 10:04:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:57.291 10:04:10 -- common/autotest_common.sh@852 -- # return 0 00:08:57.291 10:04:10 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:57.291 10:04:10 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:57.291 10:04:10 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:57.291 10:04:10 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:57.291 10:04:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:57.291 10:04:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.291 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.291 ************************************ 00:08:57.291 START TEST rpc_integrity 00:08:57.291 ************************************ 00:08:57.291 10:04:10 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:57.291 10:04:10 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:57.291 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.291 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.291 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.291 10:04:10 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:57.291 10:04:10 -- rpc/rpc.sh@13 -- # jq length 00:08:57.291 10:04:10 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:57.291 10:04:10 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:57.291 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.291 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.291 10:04:10 -- 
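For reference, the trace hints printed by app_setup_trace above can be followed while this spdk_tgt instance is still running (a sketch; pid 1149141 and the spdk_trace binary location are specific to this run and build):
  sudo ./build/bin/spdk_trace -s spdk_tgt -p 1149141     # live snapshot of the 'bdev' tracepoint group
  cp /dev/shm/spdk_tgt_trace.pid1149141 /tmp/            # or keep the shm file for offline analysis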
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.291 10:04:10 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:57.291 10:04:10 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:57.291 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.291 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.292 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.292 10:04:10 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:57.292 { 00:08:57.292 "name": "Malloc0", 00:08:57.292 "aliases": [ 00:08:57.292 "30c92dfa-e45f-4b81-9ee1-f076916cb2fa" 00:08:57.292 ], 00:08:57.292 "product_name": "Malloc disk", 00:08:57.292 "block_size": 512, 00:08:57.292 "num_blocks": 16384, 00:08:57.292 "uuid": "30c92dfa-e45f-4b81-9ee1-f076916cb2fa", 00:08:57.292 "assigned_rate_limits": { 00:08:57.292 "rw_ios_per_sec": 0, 00:08:57.292 "rw_mbytes_per_sec": 0, 00:08:57.292 "r_mbytes_per_sec": 0, 00:08:57.292 "w_mbytes_per_sec": 0 00:08:57.292 }, 00:08:57.292 "claimed": false, 00:08:57.292 "zoned": false, 00:08:57.292 "supported_io_types": { 00:08:57.292 "read": true, 00:08:57.292 "write": true, 00:08:57.292 "unmap": true, 00:08:57.292 "write_zeroes": true, 00:08:57.292 "flush": true, 00:08:57.292 "reset": true, 00:08:57.292 "compare": false, 00:08:57.292 "compare_and_write": false, 00:08:57.292 "abort": true, 00:08:57.292 "nvme_admin": false, 00:08:57.292 "nvme_io": false 00:08:57.292 }, 00:08:57.292 "memory_domains": [ 00:08:57.292 { 00:08:57.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.292 "dma_device_type": 2 00:08:57.292 } 00:08:57.292 ], 00:08:57.292 "driver_specific": {} 00:08:57.292 } 00:08:57.292 ]' 00:08:57.292 10:04:10 -- rpc/rpc.sh@17 -- # jq length 00:08:57.292 10:04:10 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:57.292 10:04:10 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:57.292 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.292 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.292 [2024-04-24 10:04:10.406368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:57.292 [2024-04-24 10:04:10.406404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.292 [2024-04-24 10:04:10.406421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x506e5c0 00:08:57.292 [2024-04-24 10:04:10.406430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.292 [2024-04-24 10:04:10.407310] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.292 [2024-04-24 10:04:10.407332] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:57.292 Passthru0 00:08:57.292 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.292 10:04:10 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:57.292 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.292 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.292 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.292 10:04:10 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:57.292 { 00:08:57.292 "name": "Malloc0", 00:08:57.292 "aliases": [ 00:08:57.292 "30c92dfa-e45f-4b81-9ee1-f076916cb2fa" 00:08:57.292 ], 00:08:57.292 "product_name": "Malloc disk", 00:08:57.292 "block_size": 512, 00:08:57.292 "num_blocks": 16384, 00:08:57.292 "uuid": "30c92dfa-e45f-4b81-9ee1-f076916cb2fa", 00:08:57.292 "assigned_rate_limits": { 00:08:57.292 "rw_ios_per_sec": 0, 00:08:57.292 
"rw_mbytes_per_sec": 0, 00:08:57.292 "r_mbytes_per_sec": 0, 00:08:57.292 "w_mbytes_per_sec": 0 00:08:57.292 }, 00:08:57.292 "claimed": true, 00:08:57.292 "claim_type": "exclusive_write", 00:08:57.292 "zoned": false, 00:08:57.292 "supported_io_types": { 00:08:57.292 "read": true, 00:08:57.292 "write": true, 00:08:57.292 "unmap": true, 00:08:57.292 "write_zeroes": true, 00:08:57.292 "flush": true, 00:08:57.292 "reset": true, 00:08:57.292 "compare": false, 00:08:57.292 "compare_and_write": false, 00:08:57.292 "abort": true, 00:08:57.292 "nvme_admin": false, 00:08:57.292 "nvme_io": false 00:08:57.292 }, 00:08:57.292 "memory_domains": [ 00:08:57.292 { 00:08:57.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.292 "dma_device_type": 2 00:08:57.292 } 00:08:57.292 ], 00:08:57.292 "driver_specific": {} 00:08:57.292 }, 00:08:57.292 { 00:08:57.292 "name": "Passthru0", 00:08:57.292 "aliases": [ 00:08:57.292 "f8cbb686-1fbd-5a2c-a4e8-3e74daaf7645" 00:08:57.292 ], 00:08:57.292 "product_name": "passthru", 00:08:57.292 "block_size": 512, 00:08:57.292 "num_blocks": 16384, 00:08:57.292 "uuid": "f8cbb686-1fbd-5a2c-a4e8-3e74daaf7645", 00:08:57.292 "assigned_rate_limits": { 00:08:57.292 "rw_ios_per_sec": 0, 00:08:57.292 "rw_mbytes_per_sec": 0, 00:08:57.292 "r_mbytes_per_sec": 0, 00:08:57.292 "w_mbytes_per_sec": 0 00:08:57.292 }, 00:08:57.292 "claimed": false, 00:08:57.292 "zoned": false, 00:08:57.292 "supported_io_types": { 00:08:57.292 "read": true, 00:08:57.292 "write": true, 00:08:57.292 "unmap": true, 00:08:57.292 "write_zeroes": true, 00:08:57.292 "flush": true, 00:08:57.292 "reset": true, 00:08:57.292 "compare": false, 00:08:57.292 "compare_and_write": false, 00:08:57.292 "abort": true, 00:08:57.292 "nvme_admin": false, 00:08:57.292 "nvme_io": false 00:08:57.292 }, 00:08:57.292 "memory_domains": [ 00:08:57.292 { 00:08:57.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.292 "dma_device_type": 2 00:08:57.292 } 00:08:57.292 ], 00:08:57.292 "driver_specific": { 00:08:57.292 "passthru": { 00:08:57.292 "name": "Passthru0", 00:08:57.292 "base_bdev_name": "Malloc0" 00:08:57.292 } 00:08:57.292 } 00:08:57.292 } 00:08:57.292 ]' 00:08:57.292 10:04:10 -- rpc/rpc.sh@21 -- # jq length 00:08:57.292 10:04:10 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:57.292 10:04:10 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:57.292 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.292 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.292 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.292 10:04:10 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:57.292 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.292 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.292 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.292 10:04:10 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:57.292 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.292 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.292 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.292 10:04:10 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:57.292 10:04:10 -- rpc/rpc.sh@26 -- # jq length 00:08:57.292 10:04:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:57.292 00:08:57.292 real 0m0.264s 00:08:57.292 user 0m0.159s 00:08:57.292 sys 0m0.050s 00:08:57.292 10:04:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.292 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.292 
************************************ 00:08:57.292 END TEST rpc_integrity 00:08:57.292 ************************************ 00:08:57.550 10:04:10 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:57.550 10:04:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:57.550 10:04:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.550 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.550 ************************************ 00:08:57.550 START TEST rpc_plugins 00:08:57.550 ************************************ 00:08:57.550 10:04:10 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:08:57.550 10:04:10 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:57.550 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.550 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.550 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.550 10:04:10 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:57.550 10:04:10 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:57.550 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.550 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.550 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.550 10:04:10 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:57.550 { 00:08:57.550 "name": "Malloc1", 00:08:57.550 "aliases": [ 00:08:57.550 "c67b6f97-de4c-408b-95cf-f8223d6dbd5f" 00:08:57.550 ], 00:08:57.550 "product_name": "Malloc disk", 00:08:57.550 "block_size": 4096, 00:08:57.550 "num_blocks": 256, 00:08:57.550 "uuid": "c67b6f97-de4c-408b-95cf-f8223d6dbd5f", 00:08:57.550 "assigned_rate_limits": { 00:08:57.550 "rw_ios_per_sec": 0, 00:08:57.550 "rw_mbytes_per_sec": 0, 00:08:57.550 "r_mbytes_per_sec": 0, 00:08:57.550 "w_mbytes_per_sec": 0 00:08:57.550 }, 00:08:57.550 "claimed": false, 00:08:57.550 "zoned": false, 00:08:57.550 "supported_io_types": { 00:08:57.550 "read": true, 00:08:57.550 "write": true, 00:08:57.550 "unmap": true, 00:08:57.550 "write_zeroes": true, 00:08:57.550 "flush": true, 00:08:57.550 "reset": true, 00:08:57.550 "compare": false, 00:08:57.550 "compare_and_write": false, 00:08:57.550 "abort": true, 00:08:57.550 "nvme_admin": false, 00:08:57.550 "nvme_io": false 00:08:57.550 }, 00:08:57.550 "memory_domains": [ 00:08:57.550 { 00:08:57.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.550 "dma_device_type": 2 00:08:57.550 } 00:08:57.550 ], 00:08:57.550 "driver_specific": {} 00:08:57.550 } 00:08:57.550 ]' 00:08:57.550 10:04:10 -- rpc/rpc.sh@32 -- # jq length 00:08:57.550 10:04:10 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:57.550 10:04:10 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:57.550 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.550 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.550 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.550 10:04:10 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:57.550 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.550 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.550 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.550 10:04:10 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:57.550 10:04:10 -- rpc/rpc.sh@36 -- # jq length 00:08:57.550 10:04:10 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:57.550 00:08:57.550 real 0m0.132s 00:08:57.550 user 0m0.085s 00:08:57.550 sys 0m0.017s 00:08:57.550 10:04:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:08:57.550 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.550 ************************************ 00:08:57.550 END TEST rpc_plugins 00:08:57.550 ************************************ 00:08:57.550 10:04:10 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:57.550 10:04:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:57.550 10:04:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.550 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.550 ************************************ 00:08:57.550 START TEST rpc_trace_cmd_test 00:08:57.551 ************************************ 00:08:57.551 10:04:10 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:08:57.551 10:04:10 -- rpc/rpc.sh@40 -- # local info 00:08:57.551 10:04:10 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:57.551 10:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.551 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.551 10:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.551 10:04:10 -- rpc/rpc.sh@42 -- # info='{ 00:08:57.551 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1149141", 00:08:57.551 "tpoint_group_mask": "0x8", 00:08:57.551 "iscsi_conn": { 00:08:57.551 "mask": "0x2", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "scsi": { 00:08:57.551 "mask": "0x4", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "bdev": { 00:08:57.551 "mask": "0x8", 00:08:57.551 "tpoint_mask": "0xffffffffffffffff" 00:08:57.551 }, 00:08:57.551 "nvmf_rdma": { 00:08:57.551 "mask": "0x10", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "nvmf_tcp": { 00:08:57.551 "mask": "0x20", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "ftl": { 00:08:57.551 "mask": "0x40", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "blobfs": { 00:08:57.551 "mask": "0x80", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "dsa": { 00:08:57.551 "mask": "0x200", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "thread": { 00:08:57.551 "mask": "0x400", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "nvme_pcie": { 00:08:57.551 "mask": "0x800", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "iaa": { 00:08:57.551 "mask": "0x1000", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "nvme_tcp": { 00:08:57.551 "mask": "0x2000", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 }, 00:08:57.551 "bdev_nvme": { 00:08:57.551 "mask": "0x4000", 00:08:57.551 "tpoint_mask": "0x0" 00:08:57.551 } 00:08:57.551 }' 00:08:57.551 10:04:10 -- rpc/rpc.sh@43 -- # jq length 00:08:57.808 10:04:10 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:57.808 10:04:10 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:57.808 10:04:10 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:57.808 10:04:10 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:57.808 10:04:10 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:57.808 10:04:10 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:57.808 10:04:10 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:57.808 10:04:10 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:57.808 10:04:10 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:57.808 00:08:57.808 real 0m0.187s 00:08:57.808 user 0m0.154s 00:08:57.808 sys 0m0.026s 00:08:57.808 10:04:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.808 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.808 
************************************ 00:08:57.808 END TEST rpc_trace_cmd_test 00:08:57.808 ************************************ 00:08:57.808 10:04:11 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:57.808 10:04:11 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:57.808 10:04:11 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:57.808 10:04:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:57.808 10:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.808 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:57.808 ************************************ 00:08:57.808 START TEST rpc_daemon_integrity 00:08:57.808 ************************************ 00:08:57.808 10:04:11 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:57.808 10:04:11 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:57.808 10:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.808 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:57.808 10:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.808 10:04:11 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:57.808 10:04:11 -- rpc/rpc.sh@13 -- # jq length 00:08:57.808 10:04:11 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:57.808 10:04:11 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:57.808 10:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.808 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:57.808 10:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.808 10:04:11 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:57.808 10:04:11 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:57.808 10:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.808 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.065 10:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.065 10:04:11 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:58.065 { 00:08:58.065 "name": "Malloc2", 00:08:58.065 "aliases": [ 00:08:58.065 "5957a1b9-6674-4c15-8e12-76ba811412ea" 00:08:58.065 ], 00:08:58.065 "product_name": "Malloc disk", 00:08:58.065 "block_size": 512, 00:08:58.065 "num_blocks": 16384, 00:08:58.065 "uuid": "5957a1b9-6674-4c15-8e12-76ba811412ea", 00:08:58.065 "assigned_rate_limits": { 00:08:58.065 "rw_ios_per_sec": 0, 00:08:58.065 "rw_mbytes_per_sec": 0, 00:08:58.065 "r_mbytes_per_sec": 0, 00:08:58.065 "w_mbytes_per_sec": 0 00:08:58.065 }, 00:08:58.065 "claimed": false, 00:08:58.065 "zoned": false, 00:08:58.065 "supported_io_types": { 00:08:58.065 "read": true, 00:08:58.065 "write": true, 00:08:58.065 "unmap": true, 00:08:58.065 "write_zeroes": true, 00:08:58.065 "flush": true, 00:08:58.065 "reset": true, 00:08:58.065 "compare": false, 00:08:58.065 "compare_and_write": false, 00:08:58.065 "abort": true, 00:08:58.065 "nvme_admin": false, 00:08:58.065 "nvme_io": false 00:08:58.065 }, 00:08:58.065 "memory_domains": [ 00:08:58.065 { 00:08:58.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.065 "dma_device_type": 2 00:08:58.065 } 00:08:58.065 ], 00:08:58.065 "driver_specific": {} 00:08:58.065 } 00:08:58.065 ]' 00:08:58.065 10:04:11 -- rpc/rpc.sh@17 -- # jq length 00:08:58.065 10:04:11 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:58.065 10:04:11 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:58.065 10:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.065 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.065 [2024-04-24 10:04:11.144271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
Malloc2 00:08:58.065 [2024-04-24 10:04:11.144306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.065 [2024-04-24 10:04:11.144323] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51fd550 00:08:58.065 [2024-04-24 10:04:11.144332] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.065 [2024-04-24 10:04:11.145056] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.065 [2024-04-24 10:04:11.145086] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:58.065 Passthru0 00:08:58.065 10:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.065 10:04:11 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:58.065 10:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.065 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.065 10:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.065 10:04:11 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:58.065 { 00:08:58.065 "name": "Malloc2", 00:08:58.065 "aliases": [ 00:08:58.065 "5957a1b9-6674-4c15-8e12-76ba811412ea" 00:08:58.065 ], 00:08:58.065 "product_name": "Malloc disk", 00:08:58.065 "block_size": 512, 00:08:58.065 "num_blocks": 16384, 00:08:58.065 "uuid": "5957a1b9-6674-4c15-8e12-76ba811412ea", 00:08:58.065 "assigned_rate_limits": { 00:08:58.065 "rw_ios_per_sec": 0, 00:08:58.065 "rw_mbytes_per_sec": 0, 00:08:58.065 "r_mbytes_per_sec": 0, 00:08:58.065 "w_mbytes_per_sec": 0 00:08:58.065 }, 00:08:58.065 "claimed": true, 00:08:58.065 "claim_type": "exclusive_write", 00:08:58.065 "zoned": false, 00:08:58.065 "supported_io_types": { 00:08:58.065 "read": true, 00:08:58.065 "write": true, 00:08:58.065 "unmap": true, 00:08:58.065 "write_zeroes": true, 00:08:58.065 "flush": true, 00:08:58.065 "reset": true, 00:08:58.065 "compare": false, 00:08:58.065 "compare_and_write": false, 00:08:58.065 "abort": true, 00:08:58.065 "nvme_admin": false, 00:08:58.065 "nvme_io": false 00:08:58.065 }, 00:08:58.065 "memory_domains": [ 00:08:58.065 { 00:08:58.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.065 "dma_device_type": 2 00:08:58.065 } 00:08:58.065 ], 00:08:58.065 "driver_specific": {} 00:08:58.065 }, 00:08:58.065 { 00:08:58.065 "name": "Passthru0", 00:08:58.065 "aliases": [ 00:08:58.065 "a397d71d-adeb-5285-8c45-005a47b2b543" 00:08:58.065 ], 00:08:58.065 "product_name": "passthru", 00:08:58.065 "block_size": 512, 00:08:58.065 "num_blocks": 16384, 00:08:58.065 "uuid": "a397d71d-adeb-5285-8c45-005a47b2b543", 00:08:58.065 "assigned_rate_limits": { 00:08:58.065 "rw_ios_per_sec": 0, 00:08:58.065 "rw_mbytes_per_sec": 0, 00:08:58.065 "r_mbytes_per_sec": 0, 00:08:58.065 "w_mbytes_per_sec": 0 00:08:58.065 }, 00:08:58.065 "claimed": false, 00:08:58.065 "zoned": false, 00:08:58.065 "supported_io_types": { 00:08:58.066 "read": true, 00:08:58.066 "write": true, 00:08:58.066 "unmap": true, 00:08:58.066 "write_zeroes": true, 00:08:58.066 "flush": true, 00:08:58.066 "reset": true, 00:08:58.066 "compare": false, 00:08:58.066 "compare_and_write": false, 00:08:58.066 "abort": true, 00:08:58.066 "nvme_admin": false, 00:08:58.066 "nvme_io": false 00:08:58.066 }, 00:08:58.066 "memory_domains": [ 00:08:58.066 { 00:08:58.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.066 "dma_device_type": 2 00:08:58.066 } 00:08:58.066 ], 00:08:58.066 "driver_specific": { 00:08:58.066 "passthru": { 00:08:58.066 "name": "Passthru0", 00:08:58.066 "base_bdev_name": "Malloc2" 00:08:58.066 } 
00:08:58.066 } 00:08:58.066 } 00:08:58.066 ]' 00:08:58.066 10:04:11 -- rpc/rpc.sh@21 -- # jq length 00:08:58.066 10:04:11 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:58.066 10:04:11 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:58.066 10:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.066 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.066 10:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.066 10:04:11 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:58.066 10:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.066 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.066 10:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.066 10:04:11 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:58.066 10:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.066 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.066 10:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.066 10:04:11 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:58.066 10:04:11 -- rpc/rpc.sh@26 -- # jq length 00:08:58.066 10:04:11 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:58.066 00:08:58.066 real 0m0.268s 00:08:58.066 user 0m0.165s 00:08:58.066 sys 0m0.043s 00:08:58.066 10:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.066 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.066 ************************************ 00:08:58.066 END TEST rpc_daemon_integrity 00:08:58.066 ************************************ 00:08:58.066 10:04:11 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:58.066 10:04:11 -- rpc/rpc.sh@84 -- # killprocess 1149141 00:08:58.066 10:04:11 -- common/autotest_common.sh@926 -- # '[' -z 1149141 ']' 00:08:58.066 10:04:11 -- common/autotest_common.sh@930 -- # kill -0 1149141 00:08:58.066 10:04:11 -- common/autotest_common.sh@931 -- # uname 00:08:58.066 10:04:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:58.066 10:04:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1149141 00:08:58.323 10:04:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:58.323 10:04:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:58.323 10:04:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1149141' 00:08:58.323 killing process with pid 1149141 00:08:58.323 10:04:11 -- common/autotest_common.sh@945 -- # kill 1149141 00:08:58.323 10:04:11 -- common/autotest_common.sh@950 -- # wait 1149141 00:08:58.581 00:08:58.581 real 0m2.355s 00:08:58.581 user 0m2.935s 00:08:58.581 sys 0m0.727s 00:08:58.581 10:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.581 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.581 ************************************ 00:08:58.581 END TEST rpc 00:08:58.581 ************************************ 00:08:58.581 10:04:11 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:58.581 10:04:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.581 10:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.581 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.581 ************************************ 00:08:58.581 START TEST rpc_client 00:08:58.581 ************************************ 00:08:58.581 10:04:11 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:58.581 * Looking for test storage... 00:08:58.581 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:08:58.581 10:04:11 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:58.581 OK 00:08:58.581 10:04:11 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:58.581 00:08:58.581 real 0m0.120s 00:08:58.581 user 0m0.048s 00:08:58.581 sys 0m0.082s 00:08:58.581 10:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.581 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.581 ************************************ 00:08:58.581 END TEST rpc_client 00:08:58.581 ************************************ 00:08:58.839 10:04:11 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:08:58.839 10:04:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.839 10:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.839 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.839 ************************************ 00:08:58.839 START TEST json_config 00:08:58.839 ************************************ 00:08:58.839 10:04:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:08:58.839 10:04:11 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.839 10:04:11 -- nvmf/common.sh@7 -- # uname -s 00:08:58.839 10:04:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.839 10:04:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.839 10:04:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.839 10:04:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.839 10:04:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.839 10:04:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.839 10:04:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.839 10:04:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.839 10:04:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.839 10:04:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.839 10:04:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:08:58.839 10:04:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:08:58.839 10:04:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.839 10:04:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.839 10:04:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:58.839 10:04:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:58.839 10:04:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.839 10:04:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.839 10:04:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.839 10:04:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.839 10:04:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.839 10:04:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.839 10:04:11 -- paths/export.sh@5 -- # export PATH 00:08:58.839 10:04:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.839 10:04:11 -- nvmf/common.sh@46 -- # : 0 00:08:58.839 10:04:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:58.839 10:04:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:58.839 10:04:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:58.839 10:04:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.839 10:04:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.839 10:04:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:58.839 10:04:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:58.839 10:04:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:58.839 10:04:11 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:58.839 10:04:11 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:58.839 10:04:11 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:58.839 10:04:11 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:58.839 10:04:11 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:58.839 WARNING: No tests are enabled so not running JSON configuration tests 00:08:58.839 10:04:11 -- json_config/json_config.sh@27 -- # exit 0 00:08:58.839 00:08:58.839 real 0m0.104s 00:08:58.839 user 0m0.048s 00:08:58.839 sys 0m0.057s 00:08:58.839 10:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.839 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.839 ************************************ 00:08:58.839 END TEST json_config 00:08:58.840 ************************************ 00:08:58.840 10:04:12 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:58.840 10:04:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.840 10:04:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.840 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:58.840 ************************************ 00:08:58.840 START TEST json_config_extra_key 00:08:58.840 ************************************ 00:08:58.840 10:04:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:58.840 10:04:12 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.840 10:04:12 -- nvmf/common.sh@7 -- # uname -s 00:08:58.840 10:04:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.840 10:04:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.840 10:04:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.840 10:04:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.840 10:04:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.840 10:04:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.840 10:04:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.840 10:04:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.840 10:04:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.840 10:04:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.098 10:04:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:08:59.098 10:04:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:08:59.098 10:04:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.098 10:04:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.098 10:04:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:59.098 10:04:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:59.098 10:04:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.098 10:04:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.098 10:04:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.098 10:04:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.099 10:04:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.099 10:04:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.099 10:04:12 -- paths/export.sh@5 -- # export PATH 00:08:59.099 10:04:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.099 10:04:12 -- nvmf/common.sh@46 -- # : 0 00:08:59.099 10:04:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:59.099 10:04:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:59.099 10:04:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:59.099 10:04:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.099 10:04:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.099 10:04:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:59.099 10:04:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:59.099 10:04:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:08:59.099 INFO: launching applications... 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1149728 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:59.099 Waiting for target to run... 
00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1149728 /var/tmp/spdk_tgt.sock 00:08:59.099 10:04:12 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:08:59.099 10:04:12 -- common/autotest_common.sh@819 -- # '[' -z 1149728 ']' 00:08:59.099 10:04:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:59.099 10:04:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:59.099 10:04:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:59.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:59.099 10:04:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:59.099 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.099 [2024-04-24 10:04:12.163203] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:59.099 [2024-04-24 10:04:12.163281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149728 ] 00:08:59.099 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.356 [2024-04-24 10:04:12.459367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.356 [2024-04-24 10:04:12.530003] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.356 [2024-04-24 10:04:12.530131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.920 10:04:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:59.920 10:04:12 -- common/autotest_common.sh@852 -- # return 0 00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:59.920 00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:08:59.920 INFO: shutting down applications... 
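Annotation, not part of the recorded output: the json_config_extra_key trace above and below boils down to launching spdk_tgt on a private RPC socket with a JSON config applied at startup, waiting for the socket, then stopping the target with SIGINT. A minimal standalone sketch of that flow, using the flags and paths from this run; the SPDK/TGT_PID variable names are mine, and the wait step is approximated with a single rpc_get_methods call instead of the harness's waitforlisten helper:

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

# Start the target on a private RPC socket, applying the JSON config at boot.
$SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json $SPDK/test/json_config/extra_key.json &
TGT_PID=$!

# Wait until the RPC socket answers, then shut the target down
# (the test's shutdown loop below does this with kill -SIGINT plus a 30-iteration poll).
$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null
kill -SIGINT $TGT_PID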
00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1149728 ]] 00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1149728 00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1149728 00:08:59.920 10:04:12 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:00.486 10:04:13 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:00.486 10:04:13 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:00.486 10:04:13 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1149728 00:09:00.486 10:04:13 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:09:00.486 10:04:13 -- json_config/json_config_extra_key.sh@52 -- # break 00:09:00.486 10:04:13 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:09:00.486 10:04:13 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:09:00.486 SPDK target shutdown done 00:09:00.486 10:04:13 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:09:00.486 Success 00:09:00.486 00:09:00.486 real 0m1.441s 00:09:00.486 user 0m1.220s 00:09:00.486 sys 0m0.396s 00:09:00.486 10:04:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.486 10:04:13 -- common/autotest_common.sh@10 -- # set +x 00:09:00.486 ************************************ 00:09:00.486 END TEST json_config_extra_key 00:09:00.486 ************************************ 00:09:00.486 10:04:13 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:00.486 10:04:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:00.486 10:04:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:00.486 10:04:13 -- common/autotest_common.sh@10 -- # set +x 00:09:00.486 ************************************ 00:09:00.486 START TEST alias_rpc 00:09:00.486 ************************************ 00:09:00.486 10:04:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:00.486 * Looking for test storage... 00:09:00.486 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:09:00.486 10:04:13 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:00.486 10:04:13 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1149960 00:09:00.486 10:04:13 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:09:00.486 10:04:13 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1149960 00:09:00.486 10:04:13 -- common/autotest_common.sh@819 -- # '[' -z 1149960 ']' 00:09:00.486 10:04:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.486 10:04:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:00.486 10:04:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:00.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.486 10:04:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:00.486 10:04:13 -- common/autotest_common.sh@10 -- # set +x 00:09:00.486 [2024-04-24 10:04:13.660188] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:00.486 [2024-04-24 10:04:13.660286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149960 ] 00:09:00.486 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.486 [2024-04-24 10:04:13.737954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.744 [2024-04-24 10:04:13.825089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:00.744 [2024-04-24 10:04:13.825212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.310 10:04:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:01.310 10:04:14 -- common/autotest_common.sh@852 -- # return 0 00:09:01.310 10:04:14 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:01.569 10:04:14 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1149960 00:09:01.569 10:04:14 -- common/autotest_common.sh@926 -- # '[' -z 1149960 ']' 00:09:01.569 10:04:14 -- common/autotest_common.sh@930 -- # kill -0 1149960 00:09:01.569 10:04:14 -- common/autotest_common.sh@931 -- # uname 00:09:01.569 10:04:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:01.569 10:04:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1149960 00:09:01.569 10:04:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:01.569 10:04:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:01.569 10:04:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1149960' 00:09:01.569 killing process with pid 1149960 00:09:01.569 10:04:14 -- common/autotest_common.sh@945 -- # kill 1149960 00:09:01.569 10:04:14 -- common/autotest_common.sh@950 -- # wait 1149960 00:09:01.827 00:09:01.827 real 0m1.505s 00:09:01.827 user 0m1.561s 00:09:01.827 sys 0m0.476s 00:09:01.827 10:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.827 10:04:15 -- common/autotest_common.sh@10 -- # set +x 00:09:01.827 ************************************ 00:09:01.827 END TEST alias_rpc 00:09:01.827 ************************************ 00:09:01.827 10:04:15 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:09:01.827 10:04:15 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:01.827 10:04:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:01.827 10:04:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:01.827 10:04:15 -- common/autotest_common.sh@10 -- # set +x 00:09:01.827 ************************************ 00:09:01.827 START TEST spdkcli_tcp 00:09:01.827 ************************************ 00:09:01.827 10:04:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:02.086 * Looking for test storage... 
00:09:02.086 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:09:02.086 10:04:15 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:09:02.086 10:04:15 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:02.086 10:04:15 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:09:02.086 10:04:15 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:02.086 10:04:15 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:02.086 10:04:15 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:02.086 10:04:15 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:02.086 10:04:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:02.086 10:04:15 -- common/autotest_common.sh@10 -- # set +x 00:09:02.086 10:04:15 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1150284 00:09:02.086 10:04:15 -- spdkcli/tcp.sh@27 -- # waitforlisten 1150284 00:09:02.086 10:04:15 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:02.086 10:04:15 -- common/autotest_common.sh@819 -- # '[' -z 1150284 ']' 00:09:02.086 10:04:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.086 10:04:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:02.086 10:04:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.086 10:04:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:02.086 10:04:15 -- common/autotest_common.sh@10 -- # set +x 00:09:02.086 [2024-04-24 10:04:15.208294] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:02.086 [2024-04-24 10:04:15.208391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150284 ] 00:09:02.086 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.086 [2024-04-24 10:04:15.283370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.345 [2024-04-24 10:04:15.371887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:02.345 [2024-04-24 10:04:15.372028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.345 [2024-04-24 10:04:15.372032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.911 10:04:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:02.911 10:04:16 -- common/autotest_common.sh@852 -- # return 0 00:09:02.911 10:04:16 -- spdkcli/tcp.sh@31 -- # socat_pid=1150374 00:09:02.911 10:04:16 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:02.911 10:04:16 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:03.169 [ 00:09:03.169 "spdk_get_version", 00:09:03.169 "rpc_get_methods", 00:09:03.169 "trace_get_info", 00:09:03.169 "trace_get_tpoint_group_mask", 00:09:03.169 "trace_disable_tpoint_group", 00:09:03.169 "trace_enable_tpoint_group", 00:09:03.169 "trace_clear_tpoint_mask", 00:09:03.169 "trace_set_tpoint_mask", 00:09:03.169 "vfu_tgt_set_base_path", 00:09:03.169 "framework_get_pci_devices", 00:09:03.169 "framework_get_config", 00:09:03.169 "framework_get_subsystems", 00:09:03.169 "iobuf_get_stats", 00:09:03.169 "iobuf_set_options", 00:09:03.169 "sock_set_default_impl", 00:09:03.169 "sock_impl_set_options", 00:09:03.169 "sock_impl_get_options", 00:09:03.169 "vmd_rescan", 00:09:03.169 "vmd_remove_device", 00:09:03.170 "vmd_enable", 00:09:03.170 "accel_get_stats", 00:09:03.170 "accel_set_options", 00:09:03.170 "accel_set_driver", 00:09:03.170 "accel_crypto_key_destroy", 00:09:03.170 "accel_crypto_keys_get", 00:09:03.170 "accel_crypto_key_create", 00:09:03.170 "accel_assign_opc", 00:09:03.170 "accel_get_module_info", 00:09:03.170 "accel_get_opc_assignments", 00:09:03.170 "notify_get_notifications", 00:09:03.170 "notify_get_types", 00:09:03.170 "bdev_get_histogram", 00:09:03.170 "bdev_enable_histogram", 00:09:03.170 "bdev_set_qos_limit", 00:09:03.170 "bdev_set_qd_sampling_period", 00:09:03.170 "bdev_get_bdevs", 00:09:03.170 "bdev_reset_iostat", 00:09:03.170 "bdev_get_iostat", 00:09:03.170 "bdev_examine", 00:09:03.170 "bdev_wait_for_examine", 00:09:03.170 "bdev_set_options", 00:09:03.170 "scsi_get_devices", 00:09:03.170 "thread_set_cpumask", 00:09:03.170 "framework_get_scheduler", 00:09:03.170 "framework_set_scheduler", 00:09:03.170 "framework_get_reactors", 00:09:03.170 "thread_get_io_channels", 00:09:03.170 "thread_get_pollers", 00:09:03.170 "thread_get_stats", 00:09:03.170 "framework_monitor_context_switch", 00:09:03.170 "spdk_kill_instance", 00:09:03.170 "log_enable_timestamps", 00:09:03.170 "log_get_flags", 00:09:03.170 "log_clear_flag", 00:09:03.170 "log_set_flag", 00:09:03.170 "log_get_level", 00:09:03.170 "log_set_level", 00:09:03.170 "log_get_print_level", 00:09:03.170 "log_set_print_level", 00:09:03.170 "framework_enable_cpumask_locks", 00:09:03.170 "framework_disable_cpumask_locks", 00:09:03.170 "framework_wait_init", 00:09:03.170 
"framework_start_init", 00:09:03.170 "virtio_blk_create_transport", 00:09:03.170 "virtio_blk_get_transports", 00:09:03.170 "vhost_controller_set_coalescing", 00:09:03.170 "vhost_get_controllers", 00:09:03.170 "vhost_delete_controller", 00:09:03.170 "vhost_create_blk_controller", 00:09:03.170 "vhost_scsi_controller_remove_target", 00:09:03.170 "vhost_scsi_controller_add_target", 00:09:03.170 "vhost_start_scsi_controller", 00:09:03.170 "vhost_create_scsi_controller", 00:09:03.170 "ublk_recover_disk", 00:09:03.170 "ublk_get_disks", 00:09:03.170 "ublk_stop_disk", 00:09:03.170 "ublk_start_disk", 00:09:03.170 "ublk_destroy_target", 00:09:03.170 "ublk_create_target", 00:09:03.170 "nbd_get_disks", 00:09:03.170 "nbd_stop_disk", 00:09:03.170 "nbd_start_disk", 00:09:03.170 "env_dpdk_get_mem_stats", 00:09:03.170 "nvmf_subsystem_get_listeners", 00:09:03.170 "nvmf_subsystem_get_qpairs", 00:09:03.170 "nvmf_subsystem_get_controllers", 00:09:03.170 "nvmf_get_stats", 00:09:03.170 "nvmf_get_transports", 00:09:03.170 "nvmf_create_transport", 00:09:03.170 "nvmf_get_targets", 00:09:03.170 "nvmf_delete_target", 00:09:03.170 "nvmf_create_target", 00:09:03.170 "nvmf_subsystem_allow_any_host", 00:09:03.170 "nvmf_subsystem_remove_host", 00:09:03.170 "nvmf_subsystem_add_host", 00:09:03.170 "nvmf_subsystem_remove_ns", 00:09:03.170 "nvmf_subsystem_add_ns", 00:09:03.170 "nvmf_subsystem_listener_set_ana_state", 00:09:03.170 "nvmf_discovery_get_referrals", 00:09:03.170 "nvmf_discovery_remove_referral", 00:09:03.170 "nvmf_discovery_add_referral", 00:09:03.170 "nvmf_subsystem_remove_listener", 00:09:03.170 "nvmf_subsystem_add_listener", 00:09:03.170 "nvmf_delete_subsystem", 00:09:03.170 "nvmf_create_subsystem", 00:09:03.170 "nvmf_get_subsystems", 00:09:03.170 "nvmf_set_crdt", 00:09:03.170 "nvmf_set_config", 00:09:03.170 "nvmf_set_max_subsystems", 00:09:03.170 "iscsi_set_options", 00:09:03.170 "iscsi_get_auth_groups", 00:09:03.170 "iscsi_auth_group_remove_secret", 00:09:03.170 "iscsi_auth_group_add_secret", 00:09:03.170 "iscsi_delete_auth_group", 00:09:03.170 "iscsi_create_auth_group", 00:09:03.170 "iscsi_set_discovery_auth", 00:09:03.170 "iscsi_get_options", 00:09:03.170 "iscsi_target_node_request_logout", 00:09:03.170 "iscsi_target_node_set_redirect", 00:09:03.170 "iscsi_target_node_set_auth", 00:09:03.170 "iscsi_target_node_add_lun", 00:09:03.170 "iscsi_get_connections", 00:09:03.170 "iscsi_portal_group_set_auth", 00:09:03.170 "iscsi_start_portal_group", 00:09:03.170 "iscsi_delete_portal_group", 00:09:03.170 "iscsi_create_portal_group", 00:09:03.170 "iscsi_get_portal_groups", 00:09:03.170 "iscsi_delete_target_node", 00:09:03.170 "iscsi_target_node_remove_pg_ig_maps", 00:09:03.170 "iscsi_target_node_add_pg_ig_maps", 00:09:03.170 "iscsi_create_target_node", 00:09:03.170 "iscsi_get_target_nodes", 00:09:03.170 "iscsi_delete_initiator_group", 00:09:03.170 "iscsi_initiator_group_remove_initiators", 00:09:03.170 "iscsi_initiator_group_add_initiators", 00:09:03.170 "iscsi_create_initiator_group", 00:09:03.170 "iscsi_get_initiator_groups", 00:09:03.170 "vfu_virtio_create_scsi_endpoint", 00:09:03.170 "vfu_virtio_scsi_remove_target", 00:09:03.170 "vfu_virtio_scsi_add_target", 00:09:03.170 "vfu_virtio_create_blk_endpoint", 00:09:03.170 "vfu_virtio_delete_endpoint", 00:09:03.170 "iaa_scan_accel_module", 00:09:03.170 "dsa_scan_accel_module", 00:09:03.170 "ioat_scan_accel_module", 00:09:03.170 "accel_error_inject_error", 00:09:03.170 "bdev_iscsi_delete", 00:09:03.170 "bdev_iscsi_create", 00:09:03.170 "bdev_iscsi_set_options", 
00:09:03.170 "bdev_virtio_attach_controller", 00:09:03.170 "bdev_virtio_scsi_get_devices", 00:09:03.170 "bdev_virtio_detach_controller", 00:09:03.170 "bdev_virtio_blk_set_hotplug", 00:09:03.170 "bdev_ftl_set_property", 00:09:03.170 "bdev_ftl_get_properties", 00:09:03.170 "bdev_ftl_get_stats", 00:09:03.170 "bdev_ftl_unmap", 00:09:03.170 "bdev_ftl_unload", 00:09:03.170 "bdev_ftl_delete", 00:09:03.170 "bdev_ftl_load", 00:09:03.170 "bdev_ftl_create", 00:09:03.170 "bdev_aio_delete", 00:09:03.170 "bdev_aio_rescan", 00:09:03.170 "bdev_aio_create", 00:09:03.170 "blobfs_create", 00:09:03.170 "blobfs_detect", 00:09:03.170 "blobfs_set_cache_size", 00:09:03.170 "bdev_zone_block_delete", 00:09:03.170 "bdev_zone_block_create", 00:09:03.170 "bdev_delay_delete", 00:09:03.170 "bdev_delay_create", 00:09:03.170 "bdev_delay_update_latency", 00:09:03.170 "bdev_split_delete", 00:09:03.170 "bdev_split_create", 00:09:03.170 "bdev_error_inject_error", 00:09:03.170 "bdev_error_delete", 00:09:03.170 "bdev_error_create", 00:09:03.170 "bdev_raid_set_options", 00:09:03.170 "bdev_raid_remove_base_bdev", 00:09:03.170 "bdev_raid_add_base_bdev", 00:09:03.170 "bdev_raid_delete", 00:09:03.170 "bdev_raid_create", 00:09:03.170 "bdev_raid_get_bdevs", 00:09:03.170 "bdev_lvol_grow_lvstore", 00:09:03.170 "bdev_lvol_get_lvols", 00:09:03.170 "bdev_lvol_get_lvstores", 00:09:03.170 "bdev_lvol_delete", 00:09:03.170 "bdev_lvol_set_read_only", 00:09:03.170 "bdev_lvol_resize", 00:09:03.170 "bdev_lvol_decouple_parent", 00:09:03.170 "bdev_lvol_inflate", 00:09:03.170 "bdev_lvol_rename", 00:09:03.170 "bdev_lvol_clone_bdev", 00:09:03.170 "bdev_lvol_clone", 00:09:03.170 "bdev_lvol_snapshot", 00:09:03.170 "bdev_lvol_create", 00:09:03.170 "bdev_lvol_delete_lvstore", 00:09:03.170 "bdev_lvol_rename_lvstore", 00:09:03.170 "bdev_lvol_create_lvstore", 00:09:03.170 "bdev_passthru_delete", 00:09:03.170 "bdev_passthru_create", 00:09:03.170 "bdev_nvme_cuse_unregister", 00:09:03.170 "bdev_nvme_cuse_register", 00:09:03.170 "bdev_opal_new_user", 00:09:03.170 "bdev_opal_set_lock_state", 00:09:03.170 "bdev_opal_delete", 00:09:03.170 "bdev_opal_get_info", 00:09:03.170 "bdev_opal_create", 00:09:03.170 "bdev_nvme_opal_revert", 00:09:03.170 "bdev_nvme_opal_init", 00:09:03.170 "bdev_nvme_send_cmd", 00:09:03.170 "bdev_nvme_get_path_iostat", 00:09:03.170 "bdev_nvme_get_mdns_discovery_info", 00:09:03.170 "bdev_nvme_stop_mdns_discovery", 00:09:03.170 "bdev_nvme_start_mdns_discovery", 00:09:03.170 "bdev_nvme_set_multipath_policy", 00:09:03.170 "bdev_nvme_set_preferred_path", 00:09:03.170 "bdev_nvme_get_io_paths", 00:09:03.170 "bdev_nvme_remove_error_injection", 00:09:03.170 "bdev_nvme_add_error_injection", 00:09:03.170 "bdev_nvme_get_discovery_info", 00:09:03.170 "bdev_nvme_stop_discovery", 00:09:03.170 "bdev_nvme_start_discovery", 00:09:03.170 "bdev_nvme_get_controller_health_info", 00:09:03.170 "bdev_nvme_disable_controller", 00:09:03.170 "bdev_nvme_enable_controller", 00:09:03.170 "bdev_nvme_reset_controller", 00:09:03.170 "bdev_nvme_get_transport_statistics", 00:09:03.170 "bdev_nvme_apply_firmware", 00:09:03.170 "bdev_nvme_detach_controller", 00:09:03.170 "bdev_nvme_get_controllers", 00:09:03.170 "bdev_nvme_attach_controller", 00:09:03.170 "bdev_nvme_set_hotplug", 00:09:03.170 "bdev_nvme_set_options", 00:09:03.170 "bdev_null_resize", 00:09:03.170 "bdev_null_delete", 00:09:03.170 "bdev_null_create", 00:09:03.170 "bdev_malloc_delete", 00:09:03.170 "bdev_malloc_create" 00:09:03.170 ] 00:09:03.170 10:04:16 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 
00:09:03.170 10:04:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:03.170 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:03.170 10:04:16 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:03.170 10:04:16 -- spdkcli/tcp.sh@38 -- # killprocess 1150284 00:09:03.170 10:04:16 -- common/autotest_common.sh@926 -- # '[' -z 1150284 ']' 00:09:03.170 10:04:16 -- common/autotest_common.sh@930 -- # kill -0 1150284 00:09:03.170 10:04:16 -- common/autotest_common.sh@931 -- # uname 00:09:03.170 10:04:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:03.170 10:04:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1150284 00:09:03.170 10:04:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:03.170 10:04:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:03.171 10:04:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1150284' 00:09:03.171 killing process with pid 1150284 00:09:03.171 10:04:16 -- common/autotest_common.sh@945 -- # kill 1150284 00:09:03.171 10:04:16 -- common/autotest_common.sh@950 -- # wait 1150284 00:09:03.429 00:09:03.429 real 0m1.544s 00:09:03.429 user 0m2.786s 00:09:03.429 sys 0m0.523s 00:09:03.429 10:04:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.429 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:03.429 ************************************ 00:09:03.429 END TEST spdkcli_tcp 00:09:03.429 ************************************ 00:09:03.429 10:04:16 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:03.429 10:04:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.429 10:04:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.429 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:03.429 ************************************ 00:09:03.429 START TEST dpdk_mem_utility 00:09:03.429 ************************************ 00:09:03.429 10:04:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:03.688 * Looking for test storage... 00:09:03.688 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:09:03.688 10:04:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:03.688 10:04:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:09:03.688 10:04:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1150606 00:09:03.688 10:04:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1150606 00:09:03.688 10:04:16 -- common/autotest_common.sh@819 -- # '[' -z 1150606 ']' 00:09:03.688 10:04:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.688 10:04:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:03.688 10:04:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:03.688 10:04:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:03.688 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:03.688 [2024-04-24 10:04:16.774011] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:03.688 [2024-04-24 10:04:16.774079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150606 ] 00:09:03.688 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.688 [2024-04-24 10:04:16.846641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.688 [2024-04-24 10:04:16.932369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:03.688 [2024-04-24 10:04:16.932494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.623 10:04:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:04.623 10:04:17 -- common/autotest_common.sh@852 -- # return 0 00:09:04.623 10:04:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:04.623 10:04:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:04.623 10:04:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.623 10:04:17 -- common/autotest_common.sh@10 -- # set +x 00:09:04.623 { 00:09:04.623 "filename": "/tmp/spdk_mem_dump.txt" 00:09:04.623 } 00:09:04.623 10:04:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.623 10:04:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:04.623 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:04.623 1 heaps totaling size 814.000000 MiB 00:09:04.623 size: 814.000000 MiB heap id: 0 00:09:04.623 end heaps---------- 00:09:04.623 8 mempools totaling size 598.116089 MiB 00:09:04.623 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:04.623 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:04.623 size: 84.521057 MiB name: bdev_io_1150606 00:09:04.623 size: 51.011292 MiB name: evtpool_1150606 00:09:04.623 size: 50.003479 MiB name: msgpool_1150606 00:09:04.623 size: 21.763794 MiB name: PDU_Pool 00:09:04.623 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:04.623 size: 0.026123 MiB name: Session_Pool 00:09:04.623 end mempools------- 00:09:04.623 6 memzones totaling size 4.142822 MiB 00:09:04.623 size: 1.000366 MiB name: RG_ring_0_1150606 00:09:04.623 size: 1.000366 MiB name: RG_ring_1_1150606 00:09:04.623 size: 1.000366 MiB name: RG_ring_4_1150606 00:09:04.623 size: 1.000366 MiB name: RG_ring_5_1150606 00:09:04.623 size: 0.125366 MiB name: RG_ring_2_1150606 00:09:04.623 size: 0.015991 MiB name: RG_ring_3_1150606 00:09:04.623 end memzones------- 00:09:04.623 10:04:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:04.623 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:09:04.623 list of free elements. 
size: 12.519348 MiB 00:09:04.623 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:04.623 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:04.623 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:04.623 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:04.623 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:04.623 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:04.623 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:04.623 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:04.623 element at address: 0x200000200000 with size: 0.841614 MiB 00:09:04.623 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:09:04.623 element at address: 0x20000b200000 with size: 0.490723 MiB 00:09:04.623 element at address: 0x200000800000 with size: 0.487793 MiB 00:09:04.623 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:04.623 element at address: 0x200027e00000 with size: 0.410034 MiB 00:09:04.623 element at address: 0x200003a00000 with size: 0.355530 MiB 00:09:04.623 list of standard malloc elements. size: 199.218079 MiB 00:09:04.623 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:04.623 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:04.623 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:04.623 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:04.623 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:04.623 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:04.623 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:04.623 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:04.623 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:04.623 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:04.623 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:04.623 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:04.623 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:04.623 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:04.623 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:04.623 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:09:04.623 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:04.623 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:04.623 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200027e69040 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:04.623 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:04.623 list of memzone associated elements. size: 602.262573 MiB 00:09:04.623 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:04.623 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:04.623 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:04.623 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:04.623 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:04.623 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1150606_0 00:09:04.623 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:04.623 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1150606_0 00:09:04.623 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:04.623 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1150606_0 00:09:04.623 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:04.623 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:04.623 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:04.623 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:04.623 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:04.623 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1150606 00:09:04.623 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:04.623 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1150606 00:09:04.623 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:04.623 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1150606 00:09:04.623 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:04.623 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:04.623 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:04.623 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:04.623 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:04.623 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:04.623 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:04.623 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:04.623 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:04.623 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1150606 00:09:04.623 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:04.624 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1150606 00:09:04.624 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:04.624 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1150606 00:09:04.624 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:04.624 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1150606 00:09:04.624 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:04.624 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1150606 00:09:04.624 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:04.624 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:04.624 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:04.624 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:04.624 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:04.624 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:04.624 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:04.624 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1150606 00:09:04.624 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:04.624 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:04.624 element at address: 0x200027e69100 with size: 0.023743 MiB 00:09:04.624 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:04.624 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:04.624 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1150606 00:09:04.624 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:09:04.624 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:04.624 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:09:04.624 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1150606 00:09:04.624 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:04.624 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1150606 00:09:04.624 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:09:04.624 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:04.624 10:04:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:04.624 10:04:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1150606 00:09:04.624 10:04:17 -- common/autotest_common.sh@926 -- # '[' -z 1150606 ']' 00:09:04.624 10:04:17 -- common/autotest_common.sh@930 -- # kill -0 1150606 00:09:04.624 10:04:17 -- common/autotest_common.sh@931 -- # uname 00:09:04.624 10:04:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:04.624 10:04:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1150606 00:09:04.624 10:04:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:04.624 10:04:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:04.624 10:04:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1150606' 00:09:04.624 killing process with pid 1150606 00:09:04.624 10:04:17 -- common/autotest_common.sh@945 -- # kill 1150606 00:09:04.624 10:04:17 -- common/autotest_common.sh@950 -- # wait 1150606 00:09:04.882 00:09:04.882 real 0m1.385s 00:09:04.882 user 0m1.412s 00:09:04.882 sys 0m0.433s 00:09:04.882 10:04:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.882 10:04:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.882 ************************************ 00:09:04.882 END TEST dpdk_mem_utility 00:09:04.882 ************************************ 00:09:04.882 10:04:18 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:09:04.882 10:04:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:04.882 10:04:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.882 10:04:18 -- common/autotest_common.sh@10 -- # set +x 
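The element dump above is test_dpdk_mem_info.sh walking the DPDK heap entry by entry before shutting the app down. A minimal sketch (not part of the test) for aggregating those per-element sizes, assuming the console output above has been saved to a file named mem_info.log:

    grep -o 'element at address: 0x[0-9a-f]* with size: [0-9.]* MiB' mem_info.log \
      | awk '{ total += $(NF-1) } END { printf "summed element size: %.6f MiB\n", total }'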
00:09:04.882 ************************************ 00:09:04.882 START TEST event 00:09:04.882 ************************************ 00:09:04.882 10:04:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:09:05.141 * Looking for test storage... 00:09:05.141 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:09:05.141 10:04:18 -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:05.141 10:04:18 -- bdev/nbd_common.sh@6 -- # set -e 00:09:05.141 10:04:18 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:05.141 10:04:18 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:05.141 10:04:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.141 10:04:18 -- common/autotest_common.sh@10 -- # set +x 00:09:05.141 ************************************ 00:09:05.141 START TEST event_perf 00:09:05.141 ************************************ 00:09:05.141 10:04:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:05.141 Running I/O for 1 seconds...[2024-04-24 10:04:18.227380] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:05.141 [2024-04-24 10:04:18.227472] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150838 ] 00:09:05.141 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.141 [2024-04-24 10:04:18.304349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.141 [2024-04-24 10:04:18.384602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.141 [2024-04-24 10:04:18.384691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.141 [2024-04-24 10:04:18.384769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.141 [2024-04-24 10:04:18.384771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.517 Running I/O for 1 seconds... 00:09:06.517 lcore 0: 186505 00:09:06.517 lcore 1: 186504 00:09:06.517 lcore 2: 186506 00:09:06.517 lcore 3: 186506 00:09:06.517 done. 
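event_perf prints one counter per reactor for the 1-second run on core mask 0xF. A small sketch for totalling the four lcore lines above, assuming the raw output has been captured to event_perf.log:

    awk '/lcore [0-9]+:/ { total += $NF } END { print "total events in 1 s:", total }' event_perf.log
    # with the counts shown above: 746021 events across 4 reactors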
00:09:06.517 00:09:06.517 real 0m1.251s 00:09:06.517 user 0m4.145s 00:09:06.517 sys 0m0.101s 00:09:06.517 10:04:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.517 10:04:19 -- common/autotest_common.sh@10 -- # set +x 00:09:06.517 ************************************ 00:09:06.517 END TEST event_perf 00:09:06.517 ************************************ 00:09:06.517 10:04:19 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:06.517 10:04:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:06.517 10:04:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.517 10:04:19 -- common/autotest_common.sh@10 -- # set +x 00:09:06.517 ************************************ 00:09:06.517 START TEST event_reactor 00:09:06.517 ************************************ 00:09:06.517 10:04:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:06.517 [2024-04-24 10:04:19.520321] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:06.517 [2024-04-24 10:04:19.520415] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151037 ] 00:09:06.517 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.517 [2024-04-24 10:04:19.596164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.517 [2024-04-24 10:04:19.676034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.893 test_start 00:09:07.893 oneshot 00:09:07.893 tick 100 00:09:07.893 tick 100 00:09:07.893 tick 250 00:09:07.893 tick 100 00:09:07.893 tick 100 00:09:07.893 tick 100 00:09:07.893 tick 250 00:09:07.893 tick 500 00:09:07.893 tick 100 00:09:07.893 tick 100 00:09:07.893 tick 250 00:09:07.893 tick 100 00:09:07.893 tick 100 00:09:07.893 test_end 00:09:07.893 00:09:07.893 real 0m1.249s 00:09:07.893 user 0m1.147s 00:09:07.893 sys 0m0.096s 00:09:07.893 10:04:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.893 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:09:07.893 ************************************ 00:09:07.893 END TEST event_reactor 00:09:07.893 ************************************ 00:09:07.893 10:04:20 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:07.893 10:04:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:07.893 10:04:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.893 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:09:07.893 ************************************ 00:09:07.893 START TEST event_reactor_perf 00:09:07.893 ************************************ 00:09:07.893 10:04:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:07.893 [2024-04-24 10:04:20.813038] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
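The reactor test emits one "tick <n>" line each time one of its periodic callbacks fires during the 1-second run. Tallying the lines above shows the ratio between the different periods; a sketch, assuming the output was saved to reactor.log:

    grep -o 'tick [0-9]*' reactor.log | sort | uniq -c
    #   9 tick 100
    #   3 tick 250
    #   1 tick 500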
00:09:07.893 [2024-04-24 10:04:20.813158] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151241 ] 00:09:07.893 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.893 [2024-04-24 10:04:20.887634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.893 [2024-04-24 10:04:20.965943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.829 test_start 00:09:08.829 test_end 00:09:08.829 Performance: 864162 events per second 00:09:08.829 00:09:08.829 real 0m1.242s 00:09:08.829 user 0m1.146s 00:09:08.829 sys 0m0.091s 00:09:08.829 10:04:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.829 10:04:22 -- common/autotest_common.sh@10 -- # set +x 00:09:08.829 ************************************ 00:09:08.829 END TEST event_reactor_perf 00:09:08.829 ************************************ 00:09:08.829 10:04:22 -- event/event.sh@49 -- # uname -s 00:09:08.829 10:04:22 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:08.829 10:04:22 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:08.829 10:04:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.829 10:04:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.829 10:04:22 -- common/autotest_common.sh@10 -- # set +x 00:09:08.829 ************************************ 00:09:08.829 START TEST event_scheduler 00:09:08.829 ************************************ 00:09:08.829 10:04:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:09.088 * Looking for test storage... 00:09:09.088 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:09:09.088 10:04:22 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:09.088 10:04:22 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1151461 00:09:09.088 10:04:22 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:09.088 10:04:22 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:09.088 10:04:22 -- scheduler/scheduler.sh@37 -- # waitforlisten 1151461 00:09:09.088 10:04:22 -- common/autotest_common.sh@819 -- # '[' -z 1151461 ']' 00:09:09.088 10:04:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.088 10:04:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:09.088 10:04:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.088 10:04:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:09.088 10:04:22 -- common/autotest_common.sh@10 -- # set +x 00:09:09.088 [2024-04-24 10:04:22.198497] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
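One quick reading of the reactor_perf figure above before the scheduler output: 864162 events per second on a single reactor works out to roughly 1.2 µs of round-trip cost per event. The arithmetic, as a one-liner:

    awk 'BEGIN { printf "%.0f ns per event\n", 1e9 / 864162 }'
    # -> 1157 ns per event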
00:09:09.088 [2024-04-24 10:04:22.198592] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151461 ] 00:09:09.088 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.088 [2024-04-24 10:04:22.271193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.088 [2024-04-24 10:04:22.352408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.088 [2024-04-24 10:04:22.352495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.088 [2024-04-24 10:04:22.352570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.088 [2024-04-24 10:04:22.352572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.019 10:04:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:10.019 10:04:23 -- common/autotest_common.sh@852 -- # return 0 00:09:10.019 10:04:23 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:10.019 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.019 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.019 POWER: Env isn't set yet! 00:09:10.019 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:10.019 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:10.020 POWER: Cannot set governor of lcore 0 to userspace 00:09:10.020 POWER: Attempting to initialise PSTAT power management... 00:09:10.020 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:09:10.020 POWER: Initialized successfully for lcore 0 power management 00:09:10.020 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:09:10.020 POWER: Initialized successfully for lcore 1 power management 00:09:10.020 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:09:10.020 POWER: Initialized successfully for lcore 2 power management 00:09:10.020 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:09:10.020 POWER: Initialized successfully for lcore 3 power management 00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 [2024-04-24 10:04:23.144778] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
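The rpc_cmd calls above (framework_set_scheduler dynamic followed by framework_start_init) map onto plain scripts/rpc.py invocations. A minimal sketch of the same sequence issued by hand, assuming the scheduler test app is already running with --wait-for-rpc and listening on the default /var/tmp/spdk.sock:

    ./scripts/rpc.py framework_set_scheduler dynamic   # select the dynamic scheduler
    ./scripts/rpc.py framework_start_init              # let the app finish initialization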
00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:10.020 10:04:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:10.020 10:04:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 ************************************ 00:09:10.020 START TEST scheduler_create_thread 00:09:10.020 ************************************ 00:09:10.020 10:04:23 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 2 00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 3 00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 4 00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 5 00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 6 00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 7 00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 8 00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 9 00:09:10.020 
10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 10 00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.020 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:10.020 10:04:23 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:10.020 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.020 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.954 10:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.954 10:04:24 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:10.954 10:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.954 10:04:24 -- common/autotest_common.sh@10 -- # set +x 00:09:12.328 10:04:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.328 10:04:25 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:12.328 10:04:25 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:12.328 10:04:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.328 10:04:25 -- common/autotest_common.sh@10 -- # set +x 00:09:13.260 10:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.260 00:09:13.260 real 0m3.382s 00:09:13.260 user 0m0.021s 00:09:13.260 sys 0m0.009s 00:09:13.260 10:04:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.260 10:04:26 -- common/autotest_common.sh@10 -- # set +x 00:09:13.260 ************************************ 00:09:13.260 END TEST scheduler_create_thread 00:09:13.260 ************************************ 00:09:13.518 10:04:26 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:13.518 10:04:26 -- scheduler/scheduler.sh@46 -- # killprocess 1151461 00:09:13.518 10:04:26 -- common/autotest_common.sh@926 -- # '[' -z 1151461 ']' 00:09:13.518 10:04:26 -- common/autotest_common.sh@930 -- # kill -0 1151461 00:09:13.518 10:04:26 -- common/autotest_common.sh@931 -- # uname 00:09:13.518 10:04:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:13.518 10:04:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1151461 00:09:13.518 10:04:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:13.518 10:04:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:13.518 10:04:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1151461' 00:09:13.518 killing process with pid 1151461 00:09:13.518 10:04:26 -- common/autotest_common.sh@945 -- # kill 1151461 00:09:13.518 10:04:26 -- common/autotest_common.sh@950 -- # wait 1151461 00:09:13.803 [2024-04-24 10:04:26.916911] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
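scheduler_create_thread drives everything through the scheduler_plugin RPC plugin shipped with the test app. A condensed sketch of the calls shown above, issued directly; it assumes PYTHONPATH lets rpc.py find scheduler_plugin from test/event/scheduler, and that thread ids 11 and 12 come back in the same order as in this run:

    RPC="./scripts/rpc.py --plugin scheduler_plugin"
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
    $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
    $RPC scheduler_thread_create -n half_active -a 0              # unpinned; prints the new thread id
    $RPC scheduler_thread_set_active 11 50                        # raise its busy percentage to 50
    $RPC scheduler_thread_create -n deleted -a 100
    $RPC scheduler_thread_delete 12                               # remove it again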
00:09:13.803 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:09:13.803 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:09:13.803 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:09:13.803 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:09:13.803 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:09:13.803 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:09:13.803 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:09:13.803 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:09:14.073 00:09:14.073 real 0m5.079s 00:09:14.073 user 0m10.470s 00:09:14.073 sys 0m0.406s 00:09:14.073 10:04:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.073 10:04:27 -- common/autotest_common.sh@10 -- # set +x 00:09:14.073 ************************************ 00:09:14.073 END TEST event_scheduler 00:09:14.073 ************************************ 00:09:14.073 10:04:27 -- event/event.sh@51 -- # modprobe -n nbd 00:09:14.073 10:04:27 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:14.073 10:04:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.073 10:04:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.073 10:04:27 -- common/autotest_common.sh@10 -- # set +x 00:09:14.073 ************************************ 00:09:14.073 START TEST app_repeat 00:09:14.073 ************************************ 00:09:14.073 10:04:27 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:14.073 10:04:27 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.073 10:04:27 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.073 10:04:27 -- event/event.sh@13 -- # local nbd_list 00:09:14.073 10:04:27 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.073 10:04:27 -- event/event.sh@14 -- # local bdev_list 00:09:14.073 10:04:27 -- event/event.sh@15 -- # local repeat_times=4 00:09:14.073 10:04:27 -- event/event.sh@17 -- # modprobe nbd 00:09:14.073 10:04:27 -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:14.073 10:04:27 -- event/event.sh@19 -- # repeat_pid=1152182 00:09:14.073 10:04:27 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:14.073 10:04:27 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1152182' 00:09:14.073 Process app_repeat pid: 1152182 00:09:14.073 10:04:27 -- event/event.sh@23 -- # for i in {0..2} 00:09:14.073 10:04:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:14.073 spdk_app_start Round 0 00:09:14.073 10:04:27 -- event/event.sh@25 -- # waitforlisten 1152182 /var/tmp/spdk-nbd.sock 00:09:14.073 10:04:27 -- common/autotest_common.sh@819 -- # '[' -z 1152182 ']' 00:09:14.073 10:04:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:14.073 10:04:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:14.073 10:04:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:14.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:14.073 10:04:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:14.073 10:04:27 -- common/autotest_common.sh@10 -- # set +x 00:09:14.073 [2024-04-24 10:04:27.231091] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:14.073 [2024-04-24 10:04:27.231148] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152182 ] 00:09:14.073 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.073 [2024-04-24 10:04:27.304189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:14.331 [2024-04-24 10:04:27.397760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.331 [2024-04-24 10:04:27.397764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.897 10:04:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:14.897 10:04:28 -- common/autotest_common.sh@852 -- # return 0 00:09:14.897 10:04:28 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:15.156 Malloc0 00:09:15.156 10:04:28 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:15.156 Malloc1 00:09:15.415 10:04:28 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@12 -- # local i 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:15.415 /dev/nbd0 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:15.415 10:04:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:15.415 10:04:28 -- common/autotest_common.sh@857 -- # local i 00:09:15.415 10:04:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:15.415 10:04:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:15.415 10:04:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:15.415 10:04:28 -- 
common/autotest_common.sh@861 -- # break 00:09:15.415 10:04:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:15.415 10:04:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:15.415 10:04:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:15.415 1+0 records in 00:09:15.415 1+0 records out 00:09:15.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241209 s, 17.0 MB/s 00:09:15.415 10:04:28 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:15.415 10:04:28 -- common/autotest_common.sh@874 -- # size=4096 00:09:15.415 10:04:28 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:15.415 10:04:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:15.415 10:04:28 -- common/autotest_common.sh@877 -- # return 0 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.415 10:04:28 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:15.674 /dev/nbd1 00:09:15.674 10:04:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:15.674 10:04:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:15.674 10:04:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:15.674 10:04:28 -- common/autotest_common.sh@857 -- # local i 00:09:15.674 10:04:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:15.674 10:04:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:15.674 10:04:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:15.674 10:04:28 -- common/autotest_common.sh@861 -- # break 00:09:15.674 10:04:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:15.674 10:04:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:15.674 10:04:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:15.674 1+0 records in 00:09:15.674 1+0 records out 00:09:15.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228962 s, 17.9 MB/s 00:09:15.674 10:04:28 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:15.674 10:04:28 -- common/autotest_common.sh@874 -- # size=4096 00:09:15.674 10:04:28 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:15.674 10:04:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:15.674 10:04:28 -- common/autotest_common.sh@877 -- # return 0 00:09:15.674 10:04:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.674 10:04:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.674 10:04:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.674 10:04:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.674 10:04:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:15.933 { 00:09:15.933 "nbd_device": "/dev/nbd0", 00:09:15.933 "bdev_name": "Malloc0" 00:09:15.933 }, 00:09:15.933 { 00:09:15.933 "nbd_device": 
"/dev/nbd1", 00:09:15.933 "bdev_name": "Malloc1" 00:09:15.933 } 00:09:15.933 ]' 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:15.933 { 00:09:15.933 "nbd_device": "/dev/nbd0", 00:09:15.933 "bdev_name": "Malloc0" 00:09:15.933 }, 00:09:15.933 { 00:09:15.933 "nbd_device": "/dev/nbd1", 00:09:15.933 "bdev_name": "Malloc1" 00:09:15.933 } 00:09:15.933 ]' 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:15.933 /dev/nbd1' 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:15.933 /dev/nbd1' 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@65 -- # count=2 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@95 -- # count=2 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:15.933 10:04:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:15.934 256+0 records in 00:09:15.934 256+0 records out 00:09:15.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104372 s, 100 MB/s 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:15.934 256+0 records in 00:09:15.934 256+0 records out 00:09:15.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206351 s, 50.8 MB/s 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:15.934 256+0 records in 00:09:15.934 256+0 records out 00:09:15.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220239 s, 47.6 MB/s 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.934 10:04:29 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@51 -- # local i 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.934 10:04:29 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:16.192 10:04:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:16.192 10:04:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:16.192 10:04:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:16.192 10:04:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.192 10:04:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.192 10:04:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:16.192 10:04:29 -- bdev/nbd_common.sh@41 -- # break 00:09:16.192 10:04:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.192 10:04:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.192 10:04:29 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@41 -- # break 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.451 10:04:29 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@65 -- # true 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@65 -- # count=0 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@104 -- # count=0 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:16.710 10:04:29 -- bdev/nbd_common.sh@109 -- # return 0 00:09:16.710 10:04:29 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 
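Each app_repeat round above follows the same pattern: create two malloc bdevs, export them over NBD, write a random 1 MiB pattern through the block devices, read it back for comparison, then tear everything down. A condensed sketch of one round driven by hand; it assumes app_repeat is listening on /var/tmp/spdk-nbd.sock, the nbd kernel module is loaded, and /tmp/nbdrandtest is a scratch path standing in for the workspace file used above:

    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096                 # -> Malloc0
    $RPC bdev_malloc_create 64 4096                 # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest $d            # data must read back unchanged
    done
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM                 # tell the app this round is done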
00:09:16.968 10:04:29 -- event/event.sh@35 -- # sleep 3 00:09:16.968 [2024-04-24 10:04:30.195120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.226 [2024-04-24 10:04:30.278187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.226 [2024-04-24 10:04:30.278189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.226 [2024-04-24 10:04:30.324012] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:17.226 [2024-04-24 10:04:30.324057] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:19.758 10:04:32 -- event/event.sh@23 -- # for i in {0..2} 00:09:19.758 10:04:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:19.758 spdk_app_start Round 1 00:09:19.758 10:04:32 -- event/event.sh@25 -- # waitforlisten 1152182 /var/tmp/spdk-nbd.sock 00:09:19.758 10:04:32 -- common/autotest_common.sh@819 -- # '[' -z 1152182 ']' 00:09:19.758 10:04:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:19.758 10:04:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:19.758 10:04:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:19.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:19.758 10:04:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:19.758 10:04:32 -- common/autotest_common.sh@10 -- # set +x 00:09:20.016 10:04:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:20.016 10:04:33 -- common/autotest_common.sh@852 -- # return 0 00:09:20.016 10:04:33 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:20.274 Malloc0 00:09:20.274 10:04:33 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:20.274 Malloc1 00:09:20.274 10:04:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@12 -- # local i 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.274 10:04:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:20.532 
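dd prints its own throughput figure for each copy; for the 1 MiB write to /dev/nbd0 in round 0 above (1048576 bytes in 0.0206351 s) the reported 50.8 MB/s is just:

    awk 'BEGIN { printf "%.1f MB/s\n", 1048576 / 0.0206351 / 1e6 }'
    # -> 50.8 MB/s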
/dev/nbd0 00:09:20.532 10:04:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:20.532 10:04:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:20.532 10:04:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:20.532 10:04:33 -- common/autotest_common.sh@857 -- # local i 00:09:20.532 10:04:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:20.532 10:04:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:20.532 10:04:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:20.532 10:04:33 -- common/autotest_common.sh@861 -- # break 00:09:20.532 10:04:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:20.532 10:04:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:20.532 10:04:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:20.532 1+0 records in 00:09:20.532 1+0 records out 00:09:20.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231411 s, 17.7 MB/s 00:09:20.532 10:04:33 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:20.532 10:04:33 -- common/autotest_common.sh@874 -- # size=4096 00:09:20.532 10:04:33 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:20.532 10:04:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:20.532 10:04:33 -- common/autotest_common.sh@877 -- # return 0 00:09:20.532 10:04:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.532 10:04:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.532 10:04:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:20.790 /dev/nbd1 00:09:20.790 10:04:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:20.790 10:04:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:20.790 10:04:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:20.790 10:04:33 -- common/autotest_common.sh@857 -- # local i 00:09:20.790 10:04:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:20.790 10:04:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:20.790 10:04:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:20.790 10:04:33 -- common/autotest_common.sh@861 -- # break 00:09:20.790 10:04:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:20.790 10:04:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:20.790 10:04:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:20.790 1+0 records in 00:09:20.790 1+0 records out 00:09:20.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024744 s, 16.6 MB/s 00:09:20.790 10:04:33 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:20.790 10:04:33 -- common/autotest_common.sh@874 -- # size=4096 00:09:20.790 10:04:33 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:20.790 10:04:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:20.790 10:04:33 -- common/autotest_common.sh@877 -- # return 0 00:09:20.790 10:04:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.790 10:04:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:09:20.790 10:04:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:20.790 10:04:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.790 10:04:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:21.048 { 00:09:21.048 "nbd_device": "/dev/nbd0", 00:09:21.048 "bdev_name": "Malloc0" 00:09:21.048 }, 00:09:21.048 { 00:09:21.048 "nbd_device": "/dev/nbd1", 00:09:21.048 "bdev_name": "Malloc1" 00:09:21.048 } 00:09:21.048 ]' 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:21.048 { 00:09:21.048 "nbd_device": "/dev/nbd0", 00:09:21.048 "bdev_name": "Malloc0" 00:09:21.048 }, 00:09:21.048 { 00:09:21.048 "nbd_device": "/dev/nbd1", 00:09:21.048 "bdev_name": "Malloc1" 00:09:21.048 } 00:09:21.048 ]' 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:21.048 /dev/nbd1' 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:21.048 /dev/nbd1' 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@65 -- # count=2 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@95 -- # count=2 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:21.048 256+0 records in 00:09:21.048 256+0 records out 00:09:21.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104591 s, 100 MB/s 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:21.048 256+0 records in 00:09:21.048 256+0 records out 00:09:21.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204605 s, 51.2 MB/s 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:21.048 256+0 records in 00:09:21.048 256+0 records out 00:09:21.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225829 s, 46.4 MB/s 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@51 -- # local i 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.048 10:04:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:21.307 10:04:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:21.307 10:04:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:21.307 10:04:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:21.307 10:04:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.307 10:04:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.307 10:04:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:21.307 10:04:34 -- bdev/nbd_common.sh@41 -- # break 00:09:21.307 10:04:34 -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.307 10:04:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.307 10:04:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@41 -- # break 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:21.565 10:04:34 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@65 -- # true 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@65 -- # count=0 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@104 -- # count=0 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:21.565 10:04:34 -- bdev/nbd_common.sh@109 -- # return 0 00:09:21.565 10:04:34 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:21.823 10:04:35 -- event/event.sh@35 -- # sleep 3 00:09:22.080 [2024-04-24 10:04:35.219971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:22.081 [2024-04-24 10:04:35.297747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.081 [2024-04-24 10:04:35.297749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.081 [2024-04-24 10:04:35.343406] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:22.081 [2024-04-24 10:04:35.343461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:25.361 10:04:38 -- event/event.sh@23 -- # for i in {0..2} 00:09:25.361 10:04:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:25.361 spdk_app_start Round 2 00:09:25.361 10:04:38 -- event/event.sh@25 -- # waitforlisten 1152182 /var/tmp/spdk-nbd.sock 00:09:25.361 10:04:38 -- common/autotest_common.sh@819 -- # '[' -z 1152182 ']' 00:09:25.361 10:04:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:25.361 10:04:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:25.361 10:04:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:25.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
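nbd_get_disks returns the two-entry JSON shown earlier in each round while both devices are exported, and an empty list ('[]') after they are stopped, which is what the count=0 check above keys off. A sketch of the mid-round invocation by hand against the same socket, using the same jq filter as the test:

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
    # /dev/nbd0
    # /dev/nbd1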
00:09:25.361 10:04:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:25.361 10:04:38 -- common/autotest_common.sh@10 -- # set +x 00:09:25.361 10:04:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:25.361 10:04:38 -- common/autotest_common.sh@852 -- # return 0 00:09:25.361 10:04:38 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:25.361 Malloc0 00:09:25.361 10:04:38 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:25.361 Malloc1 00:09:25.361 10:04:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@12 -- # local i 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.361 10:04:38 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:25.618 /dev/nbd0 00:09:25.618 10:04:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:25.618 10:04:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:25.618 10:04:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:25.618 10:04:38 -- common/autotest_common.sh@857 -- # local i 00:09:25.618 10:04:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:25.618 10:04:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:25.618 10:04:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:25.618 10:04:38 -- common/autotest_common.sh@861 -- # break 00:09:25.618 10:04:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:25.618 10:04:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:25.618 10:04:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:25.618 1+0 records in 00:09:25.618 1+0 records out 00:09:25.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225593 s, 18.2 MB/s 00:09:25.618 10:04:38 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:25.618 10:04:38 -- common/autotest_common.sh@874 -- # size=4096 00:09:25.618 10:04:38 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:25.618 10:04:38 -- 
common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:25.618 10:04:38 -- common/autotest_common.sh@877 -- # return 0 00:09:25.618 10:04:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:25.618 10:04:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.618 10:04:38 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:25.618 /dev/nbd1 00:09:25.877 10:04:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:25.877 10:04:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:25.877 10:04:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:25.877 10:04:38 -- common/autotest_common.sh@857 -- # local i 00:09:25.877 10:04:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:25.877 10:04:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:25.877 10:04:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:25.877 10:04:38 -- common/autotest_common.sh@861 -- # break 00:09:25.877 10:04:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:25.877 10:04:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:25.877 10:04:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:25.877 1+0 records in 00:09:25.877 1+0 records out 00:09:25.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234658 s, 17.5 MB/s 00:09:25.877 10:04:38 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:25.877 10:04:38 -- common/autotest_common.sh@874 -- # size=4096 00:09:25.877 10:04:38 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:25.877 10:04:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:25.877 10:04:38 -- common/autotest_common.sh@877 -- # return 0 00:09:25.877 10:04:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:25.877 10:04:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.877 10:04:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:25.877 10:04:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.877 10:04:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:25.877 10:04:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:25.877 { 00:09:25.877 "nbd_device": "/dev/nbd0", 00:09:25.877 "bdev_name": "Malloc0" 00:09:25.877 }, 00:09:25.877 { 00:09:25.877 "nbd_device": "/dev/nbd1", 00:09:25.877 "bdev_name": "Malloc1" 00:09:25.877 } 00:09:25.877 ]' 00:09:25.877 10:04:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:25.877 { 00:09:25.877 "nbd_device": "/dev/nbd0", 00:09:25.877 "bdev_name": "Malloc0" 00:09:25.877 }, 00:09:25.877 { 00:09:25.877 "nbd_device": "/dev/nbd1", 00:09:25.877 "bdev_name": "Malloc1" 00:09:25.877 } 00:09:25.877 ]' 00:09:25.877 10:04:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:25.877 10:04:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:25.877 /dev/nbd1' 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:26.136 /dev/nbd1' 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@65 -- # count=2 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:26.136 10:04:39 -- 
bdev/nbd_common.sh@95 -- # count=2 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:26.136 256+0 records in 00:09:26.136 256+0 records out 00:09:26.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049844 s, 210 MB/s 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:26.136 256+0 records in 00:09:26.136 256+0 records out 00:09:26.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211162 s, 49.7 MB/s 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:26.136 256+0 records in 00:09:26.136 256+0 records out 00:09:26.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221378 s, 47.4 MB/s 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@51 -- # local i 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.136 10:04:39 -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:26.395 10:04:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:26.395 10:04:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:26.395 10:04:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:26.395 10:04:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.395 10:04:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@41 -- # break 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@41 -- # break 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.396 10:04:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@65 -- # true 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@65 -- # count=0 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@104 -- # count=0 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:26.655 10:04:39 -- bdev/nbd_common.sh@109 -- # return 0 00:09:26.655 10:04:39 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:26.913 10:04:40 -- event/event.sh@35 -- # sleep 3 00:09:27.172 [2024-04-24 10:04:40.258309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.172 [2024-04-24 10:04:40.339742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.172 [2024-04-24 10:04:40.339745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.172 [2024-04-24 10:04:40.386209] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:27.172 [2024-04-24 10:04:40.386263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
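The round that just finished walks the full NBD attach/write/verify/detach cycle. A condensed sketch of the sequence driven by nbd_common.sh above, with the rpc.py path and the temp-file location shortened for readability:

    # create two 64 MiB malloc bdevs with a 4 KiB block size
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096      # -> Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096      # -> Malloc1
    # expose them as kernel NBD block devices
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
    # push 1 MiB of random data through each device, then read it back and compare
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$d"
    done
    # detach both devices and stop the app, which hands control to the next round
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM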
00:09:30.459 10:04:43 -- event/event.sh@38 -- # waitforlisten 1152182 /var/tmp/spdk-nbd.sock 00:09:30.459 10:04:43 -- common/autotest_common.sh@819 -- # '[' -z 1152182 ']' 00:09:30.459 10:04:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:30.459 10:04:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:30.459 10:04:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:30.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:30.459 10:04:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:30.459 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:09:30.459 10:04:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:30.459 10:04:43 -- common/autotest_common.sh@852 -- # return 0 00:09:30.459 10:04:43 -- event/event.sh@39 -- # killprocess 1152182 00:09:30.460 10:04:43 -- common/autotest_common.sh@926 -- # '[' -z 1152182 ']' 00:09:30.460 10:04:43 -- common/autotest_common.sh@930 -- # kill -0 1152182 00:09:30.460 10:04:43 -- common/autotest_common.sh@931 -- # uname 00:09:30.460 10:04:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:30.460 10:04:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1152182 00:09:30.460 10:04:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:30.460 10:04:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:30.460 10:04:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1152182' 00:09:30.460 killing process with pid 1152182 00:09:30.460 10:04:43 -- common/autotest_common.sh@945 -- # kill 1152182 00:09:30.460 10:04:43 -- common/autotest_common.sh@950 -- # wait 1152182 00:09:30.460 spdk_app_start is called in Round 0. 00:09:30.460 Shutdown signal received, stop current app iteration 00:09:30.460 Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 reinitialization... 00:09:30.460 spdk_app_start is called in Round 1. 00:09:30.460 Shutdown signal received, stop current app iteration 00:09:30.460 Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 reinitialization... 00:09:30.460 spdk_app_start is called in Round 2. 00:09:30.460 Shutdown signal received, stop current app iteration 00:09:30.460 Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 reinitialization... 00:09:30.460 spdk_app_start is called in Round 3. 
00:09:30.460 Shutdown signal received, stop current app iteration 00:09:30.460 10:04:43 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:30.460 10:04:43 -- event/event.sh@42 -- # return 0 00:09:30.460 00:09:30.460 real 0m16.256s 00:09:30.460 user 0m34.282s 00:09:30.460 sys 0m3.150s 00:09:30.460 10:04:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.460 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:09:30.460 ************************************ 00:09:30.460 END TEST app_repeat 00:09:30.460 ************************************ 00:09:30.460 10:04:43 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:30.460 10:04:43 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:30.460 10:04:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:30.460 10:04:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:30.460 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:09:30.460 ************************************ 00:09:30.460 START TEST cpu_locks 00:09:30.460 ************************************ 00:09:30.460 10:04:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:30.460 * Looking for test storage... 00:09:30.460 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:09:30.460 10:04:43 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:30.460 10:04:43 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:30.460 10:04:43 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:30.460 10:04:43 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:30.460 10:04:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:30.460 10:04:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:30.460 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:09:30.460 ************************************ 00:09:30.460 START TEST default_locks 00:09:30.460 ************************************ 00:09:30.460 10:04:43 -- common/autotest_common.sh@1104 -- # default_locks 00:09:30.460 10:04:43 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1154609 00:09:30.460 10:04:43 -- event/cpu_locks.sh@47 -- # waitforlisten 1154609 00:09:30.460 10:04:43 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:30.460 10:04:43 -- common/autotest_common.sh@819 -- # '[' -z 1154609 ']' 00:09:30.460 10:04:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.460 10:04:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:30.460 10:04:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.460 10:04:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:30.460 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:09:30.460 [2024-04-24 10:04:43.644213] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:30.460 [2024-04-24 10:04:43.644306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154609 ] 00:09:30.460 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.460 [2024-04-24 10:04:43.719894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.719 [2024-04-24 10:04:43.800635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:30.719 [2024-04-24 10:04:43.800779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.285 10:04:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:31.285 10:04:44 -- common/autotest_common.sh@852 -- # return 0 00:09:31.285 10:04:44 -- event/cpu_locks.sh@49 -- # locks_exist 1154609 00:09:31.285 10:04:44 -- event/cpu_locks.sh@22 -- # lslocks -p 1154609 00:09:31.285 10:04:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:31.544 lslocks: write error 00:09:31.544 10:04:44 -- event/cpu_locks.sh@50 -- # killprocess 1154609 00:09:31.544 10:04:44 -- common/autotest_common.sh@926 -- # '[' -z 1154609 ']' 00:09:31.544 10:04:44 -- common/autotest_common.sh@930 -- # kill -0 1154609 00:09:31.544 10:04:44 -- common/autotest_common.sh@931 -- # uname 00:09:31.544 10:04:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:31.544 10:04:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1154609 00:09:31.544 10:04:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:31.544 10:04:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:31.544 10:04:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1154609' 00:09:31.544 killing process with pid 1154609 00:09:31.544 10:04:44 -- common/autotest_common.sh@945 -- # kill 1154609 00:09:31.544 10:04:44 -- common/autotest_common.sh@950 -- # wait 1154609 00:09:32.112 10:04:45 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1154609 00:09:32.112 10:04:45 -- common/autotest_common.sh@640 -- # local es=0 00:09:32.112 10:04:45 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1154609 00:09:32.112 10:04:45 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:32.112 10:04:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:32.112 10:04:45 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:32.112 10:04:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:32.112 10:04:45 -- common/autotest_common.sh@643 -- # waitforlisten 1154609 00:09:32.112 10:04:45 -- common/autotest_common.sh@819 -- # '[' -z 1154609 ']' 00:09:32.112 10:04:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.112 10:04:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:32.112 10:04:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
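The default_locks run above boils down to: start a target on one core, confirm it holds the per-core lock, kill it, and confirm the pid is really gone. A rough equivalent, with the binary path shortened and the pid handling simplified:

    ./build/bin/spdk_tgt -m 0x1 &                # core 0 is recorded in a lock file (e.g. /var/tmp/spdk_cpu_lock_000)
    pid=$!
    lslocks -p "$pid" | grep -q spdk_cpu_lock    # passes while the target runs; the 'lslocks: write error' above
                                                 # is lslocks being cut off when grep -q exits early, not a failure
    kill "$pid"; wait "$pid"
    kill -0 "$pid" 2>/dev/null && exit 1         # mirrors the NOT waitforlisten check: the process must be gone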
00:09:32.112 10:04:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:32.112 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1154609) - No such process 00:09:32.112 ERROR: process (pid: 1154609) is no longer running 00:09:32.112 10:04:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:32.112 10:04:45 -- common/autotest_common.sh@852 -- # return 1 00:09:32.112 10:04:45 -- common/autotest_common.sh@643 -- # es=1 00:09:32.112 10:04:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:32.112 10:04:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:32.112 10:04:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:32.112 10:04:45 -- event/cpu_locks.sh@54 -- # no_locks 00:09:32.112 10:04:45 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:32.112 10:04:45 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:32.112 10:04:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:32.112 00:09:32.112 real 0m1.520s 00:09:32.112 user 0m1.564s 00:09:32.112 sys 0m0.542s 00:09:32.112 10:04:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.112 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 ************************************ 00:09:32.112 END TEST default_locks 00:09:32.112 ************************************ 00:09:32.112 10:04:45 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:32.112 10:04:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:32.112 10:04:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.112 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 ************************************ 00:09:32.112 START TEST default_locks_via_rpc 00:09:32.112 ************************************ 00:09:32.112 10:04:45 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:09:32.112 10:04:45 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1154823 00:09:32.112 10:04:45 -- event/cpu_locks.sh@63 -- # waitforlisten 1154823 00:09:32.112 10:04:45 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:32.112 10:04:45 -- common/autotest_common.sh@819 -- # '[' -z 1154823 ']' 00:09:32.112 10:04:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.112 10:04:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:32.112 10:04:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.112 10:04:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:32.112 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 [2024-04-24 10:04:45.216457] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:32.112 [2024-04-24 10:04:45.216549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154823 ] 00:09:32.112 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.112 [2024-04-24 10:04:45.293568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.112 [2024-04-24 10:04:45.374757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:32.112 [2024-04-24 10:04:45.374897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.068 10:04:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:33.068 10:04:46 -- common/autotest_common.sh@852 -- # return 0 00:09:33.068 10:04:46 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:33.068 10:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:33.068 10:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:33.068 10:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:33.068 10:04:46 -- event/cpu_locks.sh@67 -- # no_locks 00:09:33.068 10:04:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:33.068 10:04:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:33.068 10:04:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:33.068 10:04:46 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:33.068 10:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:33.068 10:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:33.068 10:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:33.068 10:04:46 -- event/cpu_locks.sh@71 -- # locks_exist 1154823 00:09:33.068 10:04:46 -- event/cpu_locks.sh@22 -- # lslocks -p 1154823 00:09:33.068 10:04:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:33.326 10:04:46 -- event/cpu_locks.sh@73 -- # killprocess 1154823 00:09:33.326 10:04:46 -- common/autotest_common.sh@926 -- # '[' -z 1154823 ']' 00:09:33.326 10:04:46 -- common/autotest_common.sh@930 -- # kill -0 1154823 00:09:33.326 10:04:46 -- common/autotest_common.sh@931 -- # uname 00:09:33.326 10:04:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:33.326 10:04:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1154823 00:09:33.326 10:04:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:33.326 10:04:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:33.326 10:04:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1154823' 00:09:33.326 killing process with pid 1154823 00:09:33.326 10:04:46 -- common/autotest_common.sh@945 -- # kill 1154823 00:09:33.326 10:04:46 -- common/autotest_common.sh@950 -- # wait 1154823 00:09:33.583 00:09:33.583 real 0m1.594s 00:09:33.583 user 0m1.653s 00:09:33.583 sys 0m0.529s 00:09:33.583 10:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.583 10:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:33.583 ************************************ 00:09:33.583 END TEST default_locks_via_rpc 00:09:33.583 ************************************ 00:09:33.583 10:04:46 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:33.583 10:04:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:33.583 10:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:33.583 10:04:46 -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.583 ************************************ 00:09:33.583 START TEST non_locking_app_on_locked_coremask 00:09:33.583 ************************************ 00:09:33.583 10:04:46 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:33.583 10:04:46 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1155029 00:09:33.583 10:04:46 -- event/cpu_locks.sh@81 -- # waitforlisten 1155029 /var/tmp/spdk.sock 00:09:33.583 10:04:46 -- common/autotest_common.sh@819 -- # '[' -z 1155029 ']' 00:09:33.583 10:04:46 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:33.583 10:04:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.583 10:04:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:33.583 10:04:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.583 10:04:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:33.583 10:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:33.583 [2024-04-24 10:04:46.839838] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:33.583 [2024-04-24 10:04:46.839896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155029 ] 00:09:33.841 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.841 [2024-04-24 10:04:46.911232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.841 [2024-04-24 10:04:47.004331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:33.841 [2024-04-24 10:04:47.004450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.407 10:04:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.407 10:04:47 -- common/autotest_common.sh@852 -- # return 0 00:09:34.407 10:04:47 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1155209 00:09:34.408 10:04:47 -- event/cpu_locks.sh@85 -- # waitforlisten 1155209 /var/tmp/spdk2.sock 00:09:34.408 10:04:47 -- common/autotest_common.sh@819 -- # '[' -z 1155209 ']' 00:09:34.408 10:04:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:34.408 10:04:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:34.408 10:04:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:34.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:34.408 10:04:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:34.408 10:04:47 -- common/autotest_common.sh@10 -- # set +x 00:09:34.408 10:04:47 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:34.408 [2024-04-24 10:04:47.677743] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
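The default_locks_via_rpc run that finished just above exercises the same per-core lock, but toggles it at runtime over JSON-RPC instead of at startup. Schematically, with the rpc.py path shortened and the pid illustrative:

    rpc.py framework_disable_cpumask_locks       # target releases its claims: no /var/tmp/spdk_cpu_lock_* files remain
    rpc.py framework_enable_cpumask_locks        # target claims them again
    lslocks -p "$pid" | grep -q spdk_cpu_lock    # and the lock is visible once more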
00:09:34.408 [2024-04-24 10:04:47.677832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155209 ] 00:09:34.665 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.665 [2024-04-24 10:04:47.777503] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:34.665 [2024-04-24 10:04:47.777528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.666 [2024-04-24 10:04:47.937068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:34.666 [2024-04-24 10:04:47.937227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.232 10:04:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:35.233 10:04:48 -- common/autotest_common.sh@852 -- # return 0 00:09:35.233 10:04:48 -- event/cpu_locks.sh@87 -- # locks_exist 1155029 00:09:35.233 10:04:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:35.233 10:04:48 -- event/cpu_locks.sh@22 -- # lslocks -p 1155029 00:09:36.606 lslocks: write error 00:09:36.606 10:04:49 -- event/cpu_locks.sh@89 -- # killprocess 1155029 00:09:36.606 10:04:49 -- common/autotest_common.sh@926 -- # '[' -z 1155029 ']' 00:09:36.606 10:04:49 -- common/autotest_common.sh@930 -- # kill -0 1155029 00:09:36.606 10:04:49 -- common/autotest_common.sh@931 -- # uname 00:09:36.606 10:04:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:36.606 10:04:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1155029 00:09:36.606 10:04:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:36.606 10:04:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:36.606 10:04:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1155029' 00:09:36.606 killing process with pid 1155029 00:09:36.606 10:04:49 -- common/autotest_common.sh@945 -- # kill 1155029 00:09:36.606 10:04:49 -- common/autotest_common.sh@950 -- # wait 1155029 00:09:37.172 10:04:50 -- event/cpu_locks.sh@90 -- # killprocess 1155209 00:09:37.172 10:04:50 -- common/autotest_common.sh@926 -- # '[' -z 1155209 ']' 00:09:37.172 10:04:50 -- common/autotest_common.sh@930 -- # kill -0 1155209 00:09:37.172 10:04:50 -- common/autotest_common.sh@931 -- # uname 00:09:37.172 10:04:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:37.172 10:04:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1155209 00:09:37.431 10:04:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:37.431 10:04:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:37.431 10:04:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1155209' 00:09:37.431 killing process with pid 1155209 00:09:37.431 10:04:50 -- common/autotest_common.sh@945 -- # kill 1155209 00:09:37.431 10:04:50 -- common/autotest_common.sh@950 -- # wait 1155209 00:09:37.690 00:09:37.690 real 0m3.987s 00:09:37.690 user 0m4.174s 00:09:37.690 sys 0m1.341s 00:09:37.690 10:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.690 10:04:50 -- common/autotest_common.sh@10 -- # set +x 00:09:37.690 ************************************ 00:09:37.690 END TEST non_locking_app_on_locked_coremask 00:09:37.690 ************************************ 00:09:37.690 10:04:50 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:09:37.690 10:04:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:37.690 10:04:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:37.690 10:04:50 -- common/autotest_common.sh@10 -- # set +x 00:09:37.690 ************************************ 00:09:37.690 START TEST locking_app_on_unlocked_coremask 00:09:37.690 ************************************ 00:09:37.690 10:04:50 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:37.690 10:04:50 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1155610 00:09:37.690 10:04:50 -- event/cpu_locks.sh@99 -- # waitforlisten 1155610 /var/tmp/spdk.sock 00:09:37.690 10:04:50 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:37.690 10:04:50 -- common/autotest_common.sh@819 -- # '[' -z 1155610 ']' 00:09:37.690 10:04:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.690 10:04:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:37.690 10:04:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.690 10:04:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:37.690 10:04:50 -- common/autotest_common.sh@10 -- # set +x 00:09:37.690 [2024-04-24 10:04:50.889687] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:37.690 [2024-04-24 10:04:50.889773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155610 ] 00:09:37.690 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.690 [2024-04-24 10:04:50.966739] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:37.690 [2024-04-24 10:04:50.966776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.949 [2024-04-24 10:04:51.049903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:37.949 [2024-04-24 10:04:51.050045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.514 10:04:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.514 10:04:51 -- common/autotest_common.sh@852 -- # return 0 00:09:38.514 10:04:51 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1155785 00:09:38.514 10:04:51 -- event/cpu_locks.sh@103 -- # waitforlisten 1155785 /var/tmp/spdk2.sock 00:09:38.514 10:04:51 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:38.514 10:04:51 -- common/autotest_common.sh@819 -- # '[' -z 1155785 ']' 00:09:38.514 10:04:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:38.514 10:04:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.514 10:04:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:38.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
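Here, as in the preceding non_locking_app_on_locked_coremask run, the key is the --disable-cpumask-locks flag: whichever instance carries it never tries to claim core 0, so two targets can share the core as long as each uses its own RPC socket. Stripped to the essentials (binary path shortened):

    ./build/bin/spdk_tgt -m 0x1 &                                                  # claims core 0
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips the claim, so it starts cleanly

locking_app_on_unlocked_coremask simply swaps which of the two instances carries the flag.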
00:09:38.514 10:04:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.514 10:04:51 -- common/autotest_common.sh@10 -- # set +x 00:09:38.514 [2024-04-24 10:04:51.732941] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:38.514 [2024-04-24 10:04:51.733004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155785 ] 00:09:38.514 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.774 [2024-04-24 10:04:51.833197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.774 [2024-04-24 10:04:52.006906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:38.774 [2024-04-24 10:04:52.007044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.387 10:04:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:39.387 10:04:52 -- common/autotest_common.sh@852 -- # return 0 00:09:39.387 10:04:52 -- event/cpu_locks.sh@105 -- # locks_exist 1155785 00:09:39.387 10:04:52 -- event/cpu_locks.sh@22 -- # lslocks -p 1155785 00:09:39.387 10:04:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:40.760 lslocks: write error 00:09:40.760 10:04:53 -- event/cpu_locks.sh@107 -- # killprocess 1155610 00:09:40.760 10:04:53 -- common/autotest_common.sh@926 -- # '[' -z 1155610 ']' 00:09:40.760 10:04:53 -- common/autotest_common.sh@930 -- # kill -0 1155610 00:09:40.760 10:04:53 -- common/autotest_common.sh@931 -- # uname 00:09:40.760 10:04:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:40.760 10:04:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1155610 00:09:40.760 10:04:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:40.760 10:04:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:40.760 10:04:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1155610' 00:09:40.760 killing process with pid 1155610 00:09:40.760 10:04:53 -- common/autotest_common.sh@945 -- # kill 1155610 00:09:40.760 10:04:53 -- common/autotest_common.sh@950 -- # wait 1155610 00:09:41.326 10:04:54 -- event/cpu_locks.sh@108 -- # killprocess 1155785 00:09:41.326 10:04:54 -- common/autotest_common.sh@926 -- # '[' -z 1155785 ']' 00:09:41.326 10:04:54 -- common/autotest_common.sh@930 -- # kill -0 1155785 00:09:41.326 10:04:54 -- common/autotest_common.sh@931 -- # uname 00:09:41.326 10:04:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:41.326 10:04:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1155785 00:09:41.326 10:04:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:41.326 10:04:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:41.326 10:04:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1155785' 00:09:41.326 killing process with pid 1155785 00:09:41.326 10:04:54 -- common/autotest_common.sh@945 -- # kill 1155785 00:09:41.326 10:04:54 -- common/autotest_common.sh@950 -- # wait 1155785 00:09:41.891 00:09:41.891 real 0m4.048s 00:09:41.891 user 0m4.268s 00:09:41.891 sys 0m1.402s 00:09:41.891 10:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.891 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:41.891 ************************************ 00:09:41.891 END TEST locking_app_on_unlocked_coremask 
00:09:41.891 ************************************ 00:09:41.891 10:04:54 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:41.891 10:04:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:41.891 10:04:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:41.891 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:41.891 ************************************ 00:09:41.891 START TEST locking_app_on_locked_coremask 00:09:41.891 ************************************ 00:09:41.891 10:04:54 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:09:41.891 10:04:54 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1156188 00:09:41.891 10:04:54 -- event/cpu_locks.sh@116 -- # waitforlisten 1156188 /var/tmp/spdk.sock 00:09:41.891 10:04:54 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:41.891 10:04:54 -- common/autotest_common.sh@819 -- # '[' -z 1156188 ']' 00:09:41.891 10:04:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.891 10:04:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:41.891 10:04:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.891 10:04:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:41.891 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:41.891 [2024-04-24 10:04:54.989492] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:41.891 [2024-04-24 10:04:54.989587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156188 ] 00:09:41.891 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.891 [2024-04-24 10:04:55.065336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.891 [2024-04-24 10:04:55.150234] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:41.891 [2024-04-24 10:04:55.150349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.822 10:04:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:42.822 10:04:55 -- common/autotest_common.sh@852 -- # return 0 00:09:42.822 10:04:55 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1156370 00:09:42.822 10:04:55 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1156370 /var/tmp/spdk2.sock 00:09:42.822 10:04:55 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:42.822 10:04:55 -- common/autotest_common.sh@640 -- # local es=0 00:09:42.822 10:04:55 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1156370 /var/tmp/spdk2.sock 00:09:42.822 10:04:55 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:42.822 10:04:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:42.822 10:04:55 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:42.822 10:04:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:42.822 10:04:55 -- common/autotest_common.sh@643 -- # waitforlisten 1156370 /var/tmp/spdk2.sock 00:09:42.822 10:04:55 -- 
common/autotest_common.sh@819 -- # '[' -z 1156370 ']' 00:09:42.822 10:04:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:42.822 10:04:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:42.822 10:04:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:42.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:42.822 10:04:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:42.822 10:04:55 -- common/autotest_common.sh@10 -- # set +x 00:09:42.822 [2024-04-24 10:04:55.826252] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:42.822 [2024-04-24 10:04:55.826340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156370 ] 00:09:42.822 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.822 [2024-04-24 10:04:55.924169] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1156188 has claimed it. 00:09:42.822 [2024-04-24 10:04:55.924210] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:43.386 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1156370) - No such process 00:09:43.386 ERROR: process (pid: 1156370) is no longer running 00:09:43.386 10:04:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:43.386 10:04:56 -- common/autotest_common.sh@852 -- # return 1 00:09:43.386 10:04:56 -- common/autotest_common.sh@643 -- # es=1 00:09:43.386 10:04:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:43.386 10:04:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:43.386 10:04:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:43.386 10:04:56 -- event/cpu_locks.sh@122 -- # locks_exist 1156188 00:09:43.386 10:04:56 -- event/cpu_locks.sh@22 -- # lslocks -p 1156188 00:09:43.386 10:04:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:43.951 lslocks: write error 00:09:43.951 10:04:57 -- event/cpu_locks.sh@124 -- # killprocess 1156188 00:09:43.951 10:04:57 -- common/autotest_common.sh@926 -- # '[' -z 1156188 ']' 00:09:43.951 10:04:57 -- common/autotest_common.sh@930 -- # kill -0 1156188 00:09:43.951 10:04:57 -- common/autotest_common.sh@931 -- # uname 00:09:43.951 10:04:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:43.951 10:04:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1156188 00:09:43.951 10:04:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:43.951 10:04:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:43.951 10:04:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1156188' 00:09:43.951 killing process with pid 1156188 00:09:43.951 10:04:57 -- common/autotest_common.sh@945 -- # kill 1156188 00:09:43.951 10:04:57 -- common/autotest_common.sh@950 -- # wait 1156188 00:09:44.210 00:09:44.210 real 0m2.512s 00:09:44.210 user 0m2.698s 00:09:44.210 sys 0m0.774s 00:09:44.210 10:04:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.210 10:04:57 -- common/autotest_common.sh@10 -- # set +x 00:09:44.210 ************************************ 00:09:44.210 END TEST locking_app_on_locked_coremask 00:09:44.210 
************************************ 00:09:44.468 10:04:57 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:44.468 10:04:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:44.468 10:04:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.468 10:04:57 -- common/autotest_common.sh@10 -- # set +x 00:09:44.468 ************************************ 00:09:44.468 START TEST locking_overlapped_coremask 00:09:44.468 ************************************ 00:09:44.468 10:04:57 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:09:44.468 10:04:57 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1156580 00:09:44.468 10:04:57 -- event/cpu_locks.sh@133 -- # waitforlisten 1156580 /var/tmp/spdk.sock 00:09:44.468 10:04:57 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:44.468 10:04:57 -- common/autotest_common.sh@819 -- # '[' -z 1156580 ']' 00:09:44.468 10:04:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.468 10:04:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:44.468 10:04:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.468 10:04:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:44.468 10:04:57 -- common/autotest_common.sh@10 -- # set +x 00:09:44.468 [2024-04-24 10:04:57.552172] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:44.468 [2024-04-24 10:04:57.552250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156580 ] 00:09:44.468 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.468 [2024-04-24 10:04:57.628789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.468 [2024-04-24 10:04:57.721430] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:44.468 [2024-04-24 10:04:57.721581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.468 [2024-04-24 10:04:57.721677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.468 [2024-04-24 10:04:57.721680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.402 10:04:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:45.402 10:04:58 -- common/autotest_common.sh@852 -- # return 0 00:09:45.402 10:04:58 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:45.402 10:04:58 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1156758 00:09:45.402 10:04:58 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1156758 /var/tmp/spdk2.sock 00:09:45.402 10:04:58 -- common/autotest_common.sh@640 -- # local es=0 00:09:45.402 10:04:58 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1156758 /var/tmp/spdk2.sock 00:09:45.402 10:04:58 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:45.402 10:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:45.402 10:04:58 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:45.402 10:04:58 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:45.402 10:04:58 -- common/autotest_common.sh@643 -- # waitforlisten 1156758 /var/tmp/spdk2.sock 00:09:45.402 10:04:58 -- common/autotest_common.sh@819 -- # '[' -z 1156758 ']' 00:09:45.402 10:04:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:45.402 10:04:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:45.402 10:04:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:45.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:45.402 10:04:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:45.402 10:04:58 -- common/autotest_common.sh@10 -- # set +x 00:09:45.402 [2024-04-24 10:04:58.387858] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:45.402 [2024-04-24 10:04:58.387951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156758 ] 00:09:45.402 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.402 [2024-04-24 10:04:58.490459] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1156580 has claimed it. 00:09:45.402 [2024-04-24 10:04:58.490498] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:45.968 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1156758) - No such process 00:09:45.968 ERROR: process (pid: 1156758) is no longer running 00:09:45.968 10:04:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:45.968 10:04:59 -- common/autotest_common.sh@852 -- # return 1 00:09:45.968 10:04:59 -- common/autotest_common.sh@643 -- # es=1 00:09:45.968 10:04:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:45.968 10:04:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:45.968 10:04:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:45.968 10:04:59 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:45.968 10:04:59 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:45.968 10:04:59 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:45.968 10:04:59 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:45.968 10:04:59 -- event/cpu_locks.sh@141 -- # killprocess 1156580 00:09:45.968 10:04:59 -- common/autotest_common.sh@926 -- # '[' -z 1156580 ']' 00:09:45.968 10:04:59 -- common/autotest_common.sh@930 -- # kill -0 1156580 00:09:45.968 10:04:59 -- common/autotest_common.sh@931 -- # uname 00:09:45.968 10:04:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:45.968 10:04:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1156580 00:09:45.968 10:04:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:45.968 10:04:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:45.968 10:04:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1156580' 00:09:45.968 killing process with pid 1156580 
00:09:45.968 10:04:59 -- common/autotest_common.sh@945 -- # kill 1156580 00:09:45.968 10:04:59 -- common/autotest_common.sh@950 -- # wait 1156580 00:09:46.227 00:09:46.227 real 0m1.887s 00:09:46.227 user 0m5.192s 00:09:46.227 sys 0m0.479s 00:09:46.227 10:04:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.227 10:04:59 -- common/autotest_common.sh@10 -- # set +x 00:09:46.227 ************************************ 00:09:46.227 END TEST locking_overlapped_coremask 00:09:46.227 ************************************ 00:09:46.227 10:04:59 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:46.227 10:04:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:46.227 10:04:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:46.227 10:04:59 -- common/autotest_common.sh@10 -- # set +x 00:09:46.227 ************************************ 00:09:46.227 START TEST locking_overlapped_coremask_via_rpc 00:09:46.227 ************************************ 00:09:46.227 10:04:59 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:09:46.227 10:04:59 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1156968 00:09:46.227 10:04:59 -- event/cpu_locks.sh@149 -- # waitforlisten 1156968 /var/tmp/spdk.sock 00:09:46.227 10:04:59 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:46.227 10:04:59 -- common/autotest_common.sh@819 -- # '[' -z 1156968 ']' 00:09:46.227 10:04:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.227 10:04:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:46.227 10:04:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.227 10:04:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:46.227 10:04:59 -- common/autotest_common.sh@10 -- # set +x 00:09:46.227 [2024-04-24 10:04:59.490587] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:46.227 [2024-04-24 10:04:59.490664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156968 ] 00:09:46.486 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.486 [2024-04-24 10:04:59.566290] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
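The locking_overlapped_coremask failure above is the intended outcome: the first target's mask 0x7 covers cores 0-2, the second's 0x1c covers cores 2-4, and the shared core 2 is already locked. In outline (binary path shortened, pid in the messages illustrative):

    ./build/bin/spdk_tgt -m 0x7 &                          # locks cores 0, 1 and 2
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # overlaps on core 2 and aborts:
    #   app.c: claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process <pid> has claimed it.
    #   app.c: spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.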
00:09:46.486 [2024-04-24 10:04:59.566321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:46.486 [2024-04-24 10:04:59.648930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:46.486 [2024-04-24 10:04:59.649122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.486 [2024-04-24 10:04:59.649154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.486 [2024-04-24 10:04:59.649157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.052 10:05:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:47.052 10:05:00 -- common/autotest_common.sh@852 -- # return 0 00:09:47.052 10:05:00 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1156997 00:09:47.052 10:05:00 -- event/cpu_locks.sh@153 -- # waitforlisten 1156997 /var/tmp/spdk2.sock 00:09:47.052 10:05:00 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:47.052 10:05:00 -- common/autotest_common.sh@819 -- # '[' -z 1156997 ']' 00:09:47.052 10:05:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:47.052 10:05:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:47.052 10:05:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:47.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:47.052 10:05:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:47.052 10:05:00 -- common/autotest_common.sh@10 -- # set +x 00:09:47.310 [2024-04-24 10:05:00.334516] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:47.310 [2024-04-24 10:05:00.334607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156997 ] 00:09:47.310 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.310 [2024-04-24 10:05:00.439370] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:47.310 [2024-04-24 10:05:00.439404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.567 [2024-04-24 10:05:00.615259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:47.567 [2024-04-24 10:05:00.615521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.567 [2024-04-24 10:05:00.619111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.567 [2024-04-24 10:05:00.619112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:48.203 10:05:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:48.203 10:05:01 -- common/autotest_common.sh@852 -- # return 0 00:09:48.203 10:05:01 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:48.203 10:05:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:48.203 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:09:48.203 10:05:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:48.203 10:05:01 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:48.203 10:05:01 -- common/autotest_common.sh@640 -- # local es=0 00:09:48.203 10:05:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:48.203 10:05:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:09:48.203 10:05:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:48.203 10:05:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:09:48.203 10:05:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:48.203 10:05:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:48.203 10:05:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:48.203 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:09:48.203 [2024-04-24 10:05:01.207118] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1156968 has claimed it. 00:09:48.203 request: 00:09:48.203 { 00:09:48.203 "method": "framework_enable_cpumask_locks", 00:09:48.203 "req_id": 1 00:09:48.203 } 00:09:48.203 Got JSON-RPC error response 00:09:48.203 response: 00:09:48.203 { 00:09:48.203 "code": -32603, 00:09:48.203 "message": "Failed to claim CPU core: 2" 00:09:48.203 } 00:09:48.203 10:05:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:09:48.203 10:05:01 -- common/autotest_common.sh@643 -- # es=1 00:09:48.203 10:05:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:48.203 10:05:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:48.203 10:05:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:48.204 10:05:01 -- event/cpu_locks.sh@158 -- # waitforlisten 1156968 /var/tmp/spdk.sock 00:09:48.204 10:05:01 -- common/autotest_common.sh@819 -- # '[' -z 1156968 ']' 00:09:48.204 10:05:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.204 10:05:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:48.204 10:05:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:48.204 10:05:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:48.204 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:09:48.204 10:05:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:48.204 10:05:01 -- common/autotest_common.sh@852 -- # return 0 00:09:48.204 10:05:01 -- event/cpu_locks.sh@159 -- # waitforlisten 1156997 /var/tmp/spdk2.sock 00:09:48.204 10:05:01 -- common/autotest_common.sh@819 -- # '[' -z 1156997 ']' 00:09:48.204 10:05:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:48.204 10:05:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:48.204 10:05:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:48.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:48.204 10:05:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:48.204 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:09:48.465 10:05:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:48.465 10:05:01 -- common/autotest_common.sh@852 -- # return 0 00:09:48.465 10:05:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:48.465 10:05:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:48.465 10:05:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:48.465 10:05:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:48.465 00:09:48.465 real 0m2.115s 00:09:48.465 user 0m0.816s 00:09:48.465 sys 0m0.224s 00:09:48.465 10:05:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.465 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:09:48.465 ************************************ 00:09:48.465 END TEST locking_overlapped_coremask_via_rpc 00:09:48.465 ************************************ 00:09:48.465 10:05:01 -- event/cpu_locks.sh@174 -- # cleanup 00:09:48.465 10:05:01 -- event/cpu_locks.sh@15 -- # [[ -z 1156968 ]] 00:09:48.465 10:05:01 -- event/cpu_locks.sh@15 -- # killprocess 1156968 00:09:48.465 10:05:01 -- common/autotest_common.sh@926 -- # '[' -z 1156968 ']' 00:09:48.465 10:05:01 -- common/autotest_common.sh@930 -- # kill -0 1156968 00:09:48.465 10:05:01 -- common/autotest_common.sh@931 -- # uname 00:09:48.465 10:05:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:48.465 10:05:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1156968 00:09:48.465 10:05:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:48.465 10:05:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:48.465 10:05:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1156968' 00:09:48.465 killing process with pid 1156968 00:09:48.465 10:05:01 -- common/autotest_common.sh@945 -- # kill 1156968 00:09:48.465 10:05:01 -- common/autotest_common.sh@950 -- # wait 1156968 00:09:49.029 10:05:02 -- event/cpu_locks.sh@16 -- # [[ -z 1156997 ]] 00:09:49.029 10:05:02 -- event/cpu_locks.sh@16 -- # killprocess 1156997 00:09:49.029 10:05:02 -- common/autotest_common.sh@926 -- # '[' -z 1156997 ']' 00:09:49.029 10:05:02 -- common/autotest_common.sh@930 -- # kill -0 1156997 00:09:49.029 10:05:02 -- common/autotest_common.sh@931 -- # uname 
00:09:49.029 10:05:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:49.029 10:05:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1156997 00:09:49.029 10:05:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:49.029 10:05:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:49.029 10:05:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1156997' 00:09:49.029 killing process with pid 1156997 00:09:49.029 10:05:02 -- common/autotest_common.sh@945 -- # kill 1156997 00:09:49.029 10:05:02 -- common/autotest_common.sh@950 -- # wait 1156997 00:09:49.287 10:05:02 -- event/cpu_locks.sh@18 -- # rm -f 00:09:49.287 10:05:02 -- event/cpu_locks.sh@1 -- # cleanup 00:09:49.287 10:05:02 -- event/cpu_locks.sh@15 -- # [[ -z 1156968 ]] 00:09:49.287 10:05:02 -- event/cpu_locks.sh@15 -- # killprocess 1156968 00:09:49.287 10:05:02 -- common/autotest_common.sh@926 -- # '[' -z 1156968 ']' 00:09:49.287 10:05:02 -- common/autotest_common.sh@930 -- # kill -0 1156968 00:09:49.287 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1156968) - No such process 00:09:49.287 10:05:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1156968 is not found' 00:09:49.287 Process with pid 1156968 is not found 00:09:49.287 10:05:02 -- event/cpu_locks.sh@16 -- # [[ -z 1156997 ]] 00:09:49.287 10:05:02 -- event/cpu_locks.sh@16 -- # killprocess 1156997 00:09:49.287 10:05:02 -- common/autotest_common.sh@926 -- # '[' -z 1156997 ']' 00:09:49.287 10:05:02 -- common/autotest_common.sh@930 -- # kill -0 1156997 00:09:49.287 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1156997) - No such process 00:09:49.287 10:05:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1156997 is not found' 00:09:49.287 Process with pid 1156997 is not found 00:09:49.287 10:05:02 -- event/cpu_locks.sh@18 -- # rm -f 00:09:49.287 00:09:49.287 real 0m18.909s 00:09:49.287 user 0m31.035s 00:09:49.287 sys 0m6.273s 00:09:49.287 10:05:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.287 10:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:49.287 ************************************ 00:09:49.287 END TEST cpu_locks 00:09:49.287 ************************************ 00:09:49.287 00:09:49.287 real 0m44.362s 00:09:49.287 user 1m22.350s 00:09:49.287 sys 0m10.424s 00:09:49.287 10:05:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.287 10:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:49.287 ************************************ 00:09:49.287 END TEST event 00:09:49.287 ************************************ 00:09:49.287 10:05:02 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:09:49.287 10:05:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:49.287 10:05:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.287 10:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:49.287 ************************************ 00:09:49.287 START TEST thread 00:09:49.287 ************************************ 00:09:49.287 10:05:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:09:49.545 * Looking for test storage... 
00:09:49.545 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:09:49.545 10:05:02 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:49.545 10:05:02 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:49.545 10:05:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.545 10:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:49.545 ************************************ 00:09:49.545 START TEST thread_poller_perf 00:09:49.545 ************************************ 00:09:49.545 10:05:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:49.545 [2024-04-24 10:05:02.633077] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:49.545 [2024-04-24 10:05:02.633174] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157432 ] 00:09:49.545 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.545 [2024-04-24 10:05:02.711765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.545 [2024-04-24 10:05:02.790187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.545 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:50.920 ====================================== 00:09:50.920 busy:2305147114 (cyc) 00:09:50.920 total_run_count: 777000 00:09:50.920 tsc_hz: 2300000000 (cyc) 00:09:50.920 ====================================== 00:09:50.920 poller_cost: 2966 (cyc), 1289 (nsec) 00:09:50.920 00:09:50.920 real 0m1.253s 00:09:50.920 user 0m1.155s 00:09:50.920 sys 0m0.094s 00:09:50.920 10:05:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.920 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:09:50.920 ************************************ 00:09:50.920 END TEST thread_poller_perf 00:09:50.920 ************************************ 00:09:50.920 10:05:03 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:50.920 10:05:03 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:50.920 10:05:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:50.920 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:09:50.920 ************************************ 00:09:50.920 START TEST thread_poller_perf 00:09:50.920 ************************************ 00:09:50.920 10:05:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:50.920 [2024-04-24 10:05:03.935532] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:50.920 [2024-04-24 10:05:03.935651] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157630 ] 00:09:50.920 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.920 [2024-04-24 10:05:04.012490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.920 [2024-04-24 10:05:04.090280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.920 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:52.294 ====================================== 00:09:52.294 busy:2301768850 (cyc) 00:09:52.294 total_run_count: 13738000 00:09:52.294 tsc_hz: 2300000000 (cyc) 00:09:52.294 ====================================== 00:09:52.294 poller_cost: 167 (cyc), 72 (nsec) 00:09:52.294 00:09:52.294 real 0m1.245s 00:09:52.294 user 0m1.144s 00:09:52.294 sys 0m0.095s 00:09:52.294 10:05:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.294 10:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:52.294 ************************************ 00:09:52.294 END TEST thread_poller_perf 00:09:52.294 ************************************ 00:09:52.294 10:05:05 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:52.294 10:05:05 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:09:52.294 10:05:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:52.294 10:05:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.294 10:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:52.294 ************************************ 00:09:52.294 START TEST thread_spdk_lock 00:09:52.294 ************************************ 00:09:52.294 10:05:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:09:52.294 [2024-04-24 10:05:05.224868] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:52.294 [2024-04-24 10:05:05.224985] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157826 ] 00:09:52.294 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.294 [2024-04-24 10:05:05.301340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:52.294 [2024-04-24 10:05:05.379332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.294 [2024-04-24 10:05:05.379334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.861 [2024-04-24 10:05:05.861289] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:52.861 [2024-04-24 10:05:05.861327] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:52.861 [2024-04-24 10:05:05.861338] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x149c080 00:09:52.861 [2024-04-24 10:05:05.862237] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:52.861 [2024-04-24 10:05:05.862340] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:52.861 [2024-04-24 10:05:05.862358] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:52.861 Starting test contend 00:09:52.861 Worker Delay Wait us Hold us Total us 00:09:52.861 0 3 168508 182276 350784 00:09:52.861 1 5 84664 282204 366869 00:09:52.861 PASS test contend 00:09:52.861 Starting test hold_by_poller 00:09:52.861 PASS test hold_by_poller 00:09:52.861 Starting test hold_by_message 00:09:52.861 PASS test hold_by_message 00:09:52.861 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:09:52.861 100014 assertions passed 00:09:52.861 0 assertions failed 00:09:52.861 00:09:52.861 real 0m0.727s 00:09:52.861 user 0m1.113s 00:09:52.861 sys 0m0.093s 00:09:52.861 10:05:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.861 10:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:52.861 ************************************ 00:09:52.861 END TEST thread_spdk_lock 00:09:52.861 ************************************ 00:09:52.861 00:09:52.861 real 0m3.467s 00:09:52.861 user 0m3.492s 00:09:52.861 sys 0m0.478s 00:09:52.861 10:05:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.861 10:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:52.861 ************************************ 00:09:52.861 END TEST thread 00:09:52.861 ************************************ 00:09:52.861 10:05:06 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:09:52.861 10:05:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:09:52.861 10:05:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.861 10:05:06 -- common/autotest_common.sh@10 -- # set +x 00:09:52.861 ************************************ 00:09:52.861 START TEST accel 00:09:52.861 ************************************ 00:09:52.861 10:05:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:09:52.861 * Looking for test storage... 00:09:52.861 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:09:52.861 10:05:06 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:52.861 10:05:06 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:52.861 10:05:06 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:52.862 10:05:06 -- accel/accel.sh@59 -- # spdk_tgt_pid=1158055 00:09:52.862 10:05:06 -- accel/accel.sh@60 -- # waitforlisten 1158055 00:09:52.862 10:05:06 -- common/autotest_common.sh@819 -- # '[' -z 1158055 ']' 00:09:52.862 10:05:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.862 10:05:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:52.862 10:05:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.862 10:05:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:52.862 10:05:06 -- common/autotest_common.sh@10 -- # set +x 00:09:52.862 10:05:06 -- accel/accel.sh@58 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:52.862 10:05:06 -- accel/accel.sh@58 -- # build_accel_config 00:09:52.862 10:05:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:52.862 10:05:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:52.862 10:05:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:52.862 10:05:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:52.862 10:05:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:52.862 10:05:06 -- accel/accel.sh@41 -- # local IFS=, 00:09:52.862 10:05:06 -- accel/accel.sh@42 -- # jq -r . 00:09:53.120 [2024-04-24 10:05:06.143506] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:53.120 [2024-04-24 10:05:06.143586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158055 ] 00:09:53.120 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.120 [2024-04-24 10:05:06.217360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.120 [2024-04-24 10:05:06.305451] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:53.120 [2024-04-24 10:05:06.305569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.687 10:05:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:53.687 10:05:06 -- common/autotest_common.sh@852 -- # return 0 00:09:53.687 10:05:06 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:53.687 10:05:06 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:53.687 10:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.687 10:05:06 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:53.687 10:05:06 -- common/autotest_common.sh@10 -- # set +x 00:09:53.945 10:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.945 10:05:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.945 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.945 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.945 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.945 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.945 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.945 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.945 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.945 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.945 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.945 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # IFS== 00:09:53.946 10:05:07 -- accel/accel.sh@64 -- # read -r opc module 00:09:53.946 10:05:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:53.946 10:05:07 -- accel/accel.sh@67 -- # killprocess 1158055 00:09:53.946 10:05:07 -- common/autotest_common.sh@926 -- # '[' -z 1158055 ']' 00:09:53.946 10:05:07 -- common/autotest_common.sh@930 -- # kill -0 1158055 00:09:53.946 10:05:07 -- common/autotest_common.sh@931 -- # uname 00:09:53.946 10:05:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:53.946 10:05:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1158055 00:09:53.946 10:05:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:53.946 10:05:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:53.946 10:05:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1158055' 00:09:53.946 killing process with pid 1158055 00:09:53.946 10:05:07 -- common/autotest_common.sh@945 -- # kill 1158055 00:09:53.946 10:05:07 -- common/autotest_common.sh@950 -- # wait 1158055 00:09:54.204 10:05:07 -- accel/accel.sh@68 -- # trap - ERR 00:09:54.204 10:05:07 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:54.204 10:05:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:54.204 10:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:54.204 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:09:54.204 10:05:07 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:09:54.204 10:05:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:54.204 10:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:09:54.204 10:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:54.204 10:05:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.204 10:05:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.204 10:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:54.204 10:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:54.204 10:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:09:54.204 10:05:07 -- accel/accel.sh@42 -- # jq -r . 
00:09:54.204 10:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.204 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:09:54.204 10:05:07 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:54.205 10:05:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:54.205 10:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:54.205 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:09:54.205 ************************************ 00:09:54.205 START TEST accel_missing_filename 00:09:54.205 ************************************ 00:09:54.205 10:05:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:09:54.205 10:05:07 -- common/autotest_common.sh@640 -- # local es=0 00:09:54.205 10:05:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:54.205 10:05:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:54.205 10:05:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:54.205 10:05:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:54.463 10:05:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:54.463 10:05:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:09:54.463 10:05:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:54.463 10:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:09:54.463 10:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:54.463 10:05:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.463 10:05:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.463 10:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:54.463 10:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:54.463 10:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:09:54.463 10:05:07 -- accel/accel.sh@42 -- # jq -r . 00:09:54.463 [2024-04-24 10:05:07.501509] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:54.463 [2024-04-24 10:05:07.501603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158273 ] 00:09:54.463 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.463 [2024-04-24 10:05:07.577000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.463 [2024-04-24 10:05:07.654807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.463 [2024-04-24 10:05:07.694436] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:54.722 [2024-04-24 10:05:07.756604] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:54.722 A filename is required. 
00:09:54.722 10:05:07 -- common/autotest_common.sh@643 -- # es=234 00:09:54.722 10:05:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:54.722 10:05:07 -- common/autotest_common.sh@652 -- # es=106 00:09:54.722 10:05:07 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:54.722 10:05:07 -- common/autotest_common.sh@660 -- # es=1 00:09:54.722 10:05:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:54.722 00:09:54.722 real 0m0.350s 00:09:54.722 user 0m0.248s 00:09:54.722 sys 0m0.139s 00:09:54.722 10:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.722 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:09:54.722 ************************************ 00:09:54.722 END TEST accel_missing_filename 00:09:54.722 ************************************ 00:09:54.722 10:05:07 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:09:54.722 10:05:07 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:54.722 10:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:54.722 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:09:54.722 ************************************ 00:09:54.722 START TEST accel_compress_verify 00:09:54.722 ************************************ 00:09:54.722 10:05:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:09:54.722 10:05:07 -- common/autotest_common.sh@640 -- # local es=0 00:09:54.722 10:05:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:09:54.722 10:05:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:54.722 10:05:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:54.722 10:05:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:54.722 10:05:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:54.722 10:05:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:09:54.722 10:05:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:09:54.722 10:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:09:54.722 10:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:54.722 10:05:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.722 10:05:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.722 10:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:54.723 10:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:54.723 10:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:09:54.723 10:05:07 -- accel/accel.sh@42 -- # jq -r . 00:09:54.723 [2024-04-24 10:05:07.893417] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:54.723 [2024-04-24 10:05:07.893511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158300 ] 00:09:54.723 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.723 [2024-04-24 10:05:07.968988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.981 [2024-04-24 10:05:08.050769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.981 [2024-04-24 10:05:08.096778] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:54.981 [2024-04-24 10:05:08.165703] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:54.981 00:09:54.981 Compression does not support the verify option, aborting. 00:09:54.981 10:05:08 -- common/autotest_common.sh@643 -- # es=161 00:09:54.981 10:05:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:54.981 10:05:08 -- common/autotest_common.sh@652 -- # es=33 00:09:54.981 10:05:08 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:54.981 10:05:08 -- common/autotest_common.sh@660 -- # es=1 00:09:54.981 10:05:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:54.981 00:09:54.981 real 0m0.368s 00:09:54.981 user 0m0.259s 00:09:54.981 sys 0m0.146s 00:09:54.981 10:05:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.981 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:09:54.981 ************************************ 00:09:54.981 END TEST accel_compress_verify 00:09:54.981 ************************************ 00:09:55.240 10:05:08 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:55.240 10:05:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:55.240 10:05:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.240 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:09:55.240 ************************************ 00:09:55.240 START TEST accel_wrong_workload 00:09:55.240 ************************************ 00:09:55.240 10:05:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:09:55.240 10:05:08 -- common/autotest_common.sh@640 -- # local es=0 00:09:55.240 10:05:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:55.240 10:05:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:55.240 10:05:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:55.240 10:05:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:55.240 10:05:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:55.240 10:05:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:09:55.240 10:05:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:55.240 10:05:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.240 10:05:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.240 10:05:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.240 10:05:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.240 10:05:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.240 10:05:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:55.240 10:05:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.240 10:05:08 -- accel/accel.sh@42 -- # jq -r . 
00:09:55.240 Unsupported workload type: foobar 00:09:55.240 [2024-04-24 10:05:08.296182] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:55.240 accel_perf options: 00:09:55.240 [-h help message] 00:09:55.240 [-q queue depth per core] 00:09:55.240 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:55.240 [-T number of threads per core 00:09:55.240 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:55.240 [-t time in seconds] 00:09:55.240 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:55.240 [ dif_verify, , dif_generate, dif_generate_copy 00:09:55.240 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:55.240 [-l for compress/decompress workloads, name of uncompressed input file 00:09:55.240 [-S for crc32c workload, use this seed value (default 0) 00:09:55.240 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:55.240 [-f for fill workload, use this BYTE value (default 255) 00:09:55.240 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:55.240 [-y verify result if this switch is on] 00:09:55.240 [-a tasks to allocate per core (default: same value as -q)] 00:09:55.240 Can be used to spread operations across a wider range of memory. 00:09:55.240 10:05:08 -- common/autotest_common.sh@643 -- # es=1 00:09:55.240 10:05:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:55.240 10:05:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:55.240 10:05:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:55.240 00:09:55.240 real 0m0.024s 00:09:55.240 user 0m0.009s 00:09:55.240 sys 0m0.015s 00:09:55.240 10:05:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.240 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:09:55.240 ************************************ 00:09:55.240 END TEST accel_wrong_workload 00:09:55.240 ************************************ 00:09:55.240 Error: writing output failed: Broken pipe 00:09:55.240 10:05:08 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:55.240 10:05:08 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:55.240 10:05:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.240 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:09:55.240 ************************************ 00:09:55.240 START TEST accel_negative_buffers 00:09:55.240 ************************************ 00:09:55.240 10:05:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:55.240 10:05:08 -- common/autotest_common.sh@640 -- # local es=0 00:09:55.240 10:05:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:55.240 10:05:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:55.240 10:05:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:55.240 10:05:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:55.240 10:05:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:55.240 10:05:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:09:55.240 10:05:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:09:55.240 10:05:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.240 10:05:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.240 10:05:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.240 10:05:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.240 10:05:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.240 10:05:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:55.240 10:05:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.240 10:05:08 -- accel/accel.sh@42 -- # jq -r . 00:09:55.240 -x option must be non-negative. 00:09:55.240 [2024-04-24 10:05:08.369927] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:55.240 accel_perf options: 00:09:55.240 [-h help message] 00:09:55.240 [-q queue depth per core] 00:09:55.240 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:55.240 [-T number of threads per core 00:09:55.240 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:55.240 [-t time in seconds] 00:09:55.241 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:55.241 [ dif_verify, , dif_generate, dif_generate_copy 00:09:55.241 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:55.241 [-l for compress/decompress workloads, name of uncompressed input file 00:09:55.241 [-S for crc32c workload, use this seed value (default 0) 00:09:55.241 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:55.241 [-f for fill workload, use this BYTE value (default 255) 00:09:55.241 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:55.241 [-y verify result if this switch is on] 00:09:55.241 [-a tasks to allocate per core (default: same value as -q)] 00:09:55.241 Can be used to spread operations across a wider range of memory. 
00:09:55.241 10:05:08 -- common/autotest_common.sh@643 -- # es=1 00:09:55.241 10:05:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:55.241 10:05:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:55.241 10:05:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:55.241 00:09:55.241 real 0m0.029s 00:09:55.241 user 0m0.015s 00:09:55.241 sys 0m0.013s 00:09:55.241 10:05:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.241 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:09:55.241 ************************************ 00:09:55.241 END TEST accel_negative_buffers 00:09:55.241 ************************************ 00:09:55.241 Error: writing output failed: Broken pipe 00:09:55.241 10:05:08 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:55.241 10:05:08 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:55.241 10:05:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.241 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:09:55.241 ************************************ 00:09:55.241 START TEST accel_crc32c 00:09:55.241 ************************************ 00:09:55.241 10:05:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:55.241 10:05:08 -- accel/accel.sh@16 -- # local accel_opc 00:09:55.241 10:05:08 -- accel/accel.sh@17 -- # local accel_module 00:09:55.241 10:05:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:55.241 10:05:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:55.241 10:05:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.241 10:05:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.241 10:05:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.241 10:05:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.241 10:05:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.241 10:05:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:55.241 10:05:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.241 10:05:08 -- accel/accel.sh@42 -- # jq -r . 00:09:55.241 [2024-04-24 10:05:08.441838] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:55.241 [2024-04-24 10:05:08.441927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158406 ] 00:09:55.241 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.500 [2024-04-24 10:05:08.520261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.500 [2024-04-24 10:05:08.611318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.875 10:05:09 -- accel/accel.sh@18 -- # out=' 00:09:56.875 SPDK Configuration: 00:09:56.875 Core mask: 0x1 00:09:56.875 00:09:56.875 Accel Perf Configuration: 00:09:56.875 Workload Type: crc32c 00:09:56.875 CRC-32C seed: 32 00:09:56.875 Transfer size: 4096 bytes 00:09:56.875 Vector count 1 00:09:56.875 Module: software 00:09:56.875 Queue depth: 32 00:09:56.875 Allocate depth: 32 00:09:56.875 # threads/core: 1 00:09:56.875 Run time: 1 seconds 00:09:56.875 Verify: Yes 00:09:56.875 00:09:56.875 Running for 1 seconds... 
00:09:56.875 00:09:56.875 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:56.875 ------------------------------------------------------------------------------------ 00:09:56.875 0,0 826496/s 3228 MiB/s 0 0 00:09:56.875 ==================================================================================== 00:09:56.875 Total 826496/s 3228 MiB/s 0 0' 00:09:56.875 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.875 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.875 10:05:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:56.875 10:05:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:56.876 10:05:09 -- accel/accel.sh@12 -- # build_accel_config 00:09:56.876 10:05:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:56.876 10:05:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.876 10:05:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.876 10:05:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:56.876 10:05:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:56.876 10:05:09 -- accel/accel.sh@41 -- # local IFS=, 00:09:56.876 10:05:09 -- accel/accel.sh@42 -- # jq -r . 00:09:56.876 [2024-04-24 10:05:09.827188] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:56.876 [2024-04-24 10:05:09.827300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158627 ] 00:09:56.876 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.876 [2024-04-24 10:05:09.903972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.876 [2024-04-24 10:05:09.985229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val=0x1 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val=crc32c 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val=32 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 
10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val=software 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@23 -- # accel_module=software 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val=32 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val=32 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val=1 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val=Yes 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:56.876 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:56.876 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:56.876 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:58.251 10:05:11 -- accel/accel.sh@21 -- # val= 00:09:58.251 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:09:58.251 10:05:11 -- accel/accel.sh@21 -- # val= 00:09:58.251 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:09:58.251 10:05:11 -- accel/accel.sh@21 -- # val= 00:09:58.251 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:09:58.251 10:05:11 -- accel/accel.sh@21 -- # val= 00:09:58.251 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:09:58.251 10:05:11 -- accel/accel.sh@21 -- # val= 00:09:58.251 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 
00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:09:58.251 10:05:11 -- accel/accel.sh@21 -- # val= 00:09:58.251 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:09:58.251 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:09:58.251 10:05:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:58.251 10:05:11 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:58.251 10:05:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:58.251 00:09:58.251 real 0m2.757s 00:09:58.251 user 0m2.469s 00:09:58.251 sys 0m0.285s 00:09:58.251 10:05:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.251 10:05:11 -- common/autotest_common.sh@10 -- # set +x 00:09:58.251 ************************************ 00:09:58.251 END TEST accel_crc32c 00:09:58.251 ************************************ 00:09:58.251 10:05:11 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:58.251 10:05:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:58.251 10:05:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.251 10:05:11 -- common/autotest_common.sh@10 -- # set +x 00:09:58.251 ************************************ 00:09:58.251 START TEST accel_crc32c_C2 00:09:58.251 ************************************ 00:09:58.251 10:05:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:58.251 10:05:11 -- accel/accel.sh@16 -- # local accel_opc 00:09:58.251 10:05:11 -- accel/accel.sh@17 -- # local accel_module 00:09:58.251 10:05:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:58.251 10:05:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:58.251 10:05:11 -- accel/accel.sh@12 -- # build_accel_config 00:09:58.251 10:05:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:58.251 10:05:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.251 10:05:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.251 10:05:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:58.251 10:05:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:58.251 10:05:11 -- accel/accel.sh@41 -- # local IFS=, 00:09:58.251 10:05:11 -- accel/accel.sh@42 -- # jq -r . 00:09:58.251 [2024-04-24 10:05:11.232441] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:58.251 [2024-04-24 10:05:11.232512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158855 ] 00:09:58.251 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.251 [2024-04-24 10:05:11.306117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.251 [2024-04-24 10:05:11.383842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.627 10:05:12 -- accel/accel.sh@18 -- # out=' 00:09:59.627 SPDK Configuration: 00:09:59.627 Core mask: 0x1 00:09:59.627 00:09:59.627 Accel Perf Configuration: 00:09:59.627 Workload Type: crc32c 00:09:59.627 CRC-32C seed: 0 00:09:59.627 Transfer size: 4096 bytes 00:09:59.627 Vector count 2 00:09:59.627 Module: software 00:09:59.627 Queue depth: 32 00:09:59.627 Allocate depth: 32 00:09:59.627 # threads/core: 1 00:09:59.627 Run time: 1 seconds 00:09:59.627 Verify: Yes 00:09:59.627 00:09:59.627 Running for 1 seconds... 00:09:59.627 00:09:59.627 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:59.627 ------------------------------------------------------------------------------------ 00:09:59.627 0,0 604512/s 4722 MiB/s 0 0 00:09:59.627 ==================================================================================== 00:09:59.627 Total 604512/s 2361 MiB/s 0 0' 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:59.627 10:05:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:59.627 10:05:12 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.627 10:05:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.627 10:05:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.627 10:05:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.627 10:05:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.627 10:05:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.627 10:05:12 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.627 10:05:12 -- accel/accel.sh@42 -- # jq -r . 00:09:59.627 [2024-04-24 10:05:12.583945] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:59.627 [2024-04-24 10:05:12.584038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159035 ] 00:09:59.627 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.627 [2024-04-24 10:05:12.659970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.627 [2024-04-24 10:05:12.741601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val=0x1 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val=crc32c 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val=0 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val=software 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@23 -- # accel_module=software 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val=32 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val=32 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- 
accel/accel.sh@21 -- # val=1 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val=Yes 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.627 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.627 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.627 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:10:01.005 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:01.005 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:01.005 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:01.005 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:01.005 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:01.005 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:01.005 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:01.005 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:01.005 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:01.005 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:01.005 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:01.005 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:01.005 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:01.005 10:05:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:01.005 10:05:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:01.005 10:05:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:01.005 00:10:01.005 real 0m2.712s 00:10:01.005 user 0m2.434s 00:10:01.005 sys 0m0.277s 00:10:01.005 10:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.005 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:10:01.005 ************************************ 00:10:01.005 END TEST accel_crc32c_C2 00:10:01.005 ************************************ 00:10:01.005 10:05:13 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:01.005 10:05:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:01.005 10:05:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.005 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:10:01.005 ************************************ 00:10:01.005 START TEST accel_copy 
00:10:01.005 ************************************ 00:10:01.005 10:05:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:01.005 10:05:13 -- accel/accel.sh@16 -- # local accel_opc 00:10:01.005 10:05:13 -- accel/accel.sh@17 -- # local accel_module 00:10:01.005 10:05:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:01.005 10:05:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:01.005 10:05:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.005 10:05:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:01.005 10:05:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.005 10:05:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.005 10:05:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:01.005 10:05:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:01.005 10:05:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:01.005 10:05:13 -- accel/accel.sh@42 -- # jq -r . 00:10:01.005 [2024-04-24 10:05:13.996776] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:01.005 [2024-04-24 10:05:13.996880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159280 ] 00:10:01.005 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.005 [2024-04-24 10:05:14.073039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.005 [2024-04-24 10:05:14.153190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.381 10:05:15 -- accel/accel.sh@18 -- # out=' 00:10:02.381 SPDK Configuration: 00:10:02.381 Core mask: 0x1 00:10:02.381 00:10:02.381 Accel Perf Configuration: 00:10:02.381 Workload Type: copy 00:10:02.381 Transfer size: 4096 bytes 00:10:02.381 Vector count 1 00:10:02.381 Module: software 00:10:02.381 Queue depth: 32 00:10:02.381 Allocate depth: 32 00:10:02.381 # threads/core: 1 00:10:02.381 Run time: 1 seconds 00:10:02.381 Verify: Yes 00:10:02.381 00:10:02.381 Running for 1 seconds... 00:10:02.381 00:10:02.381 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:02.381 ------------------------------------------------------------------------------------ 00:10:02.382 0,0 551168/s 2153 MiB/s 0 0 00:10:02.382 ==================================================================================== 00:10:02.382 Total 551168/s 2153 MiB/s 0 0' 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:02.382 10:05:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:02.382 10:05:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:02.382 10:05:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:02.382 10:05:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.382 10:05:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.382 10:05:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:02.382 10:05:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:02.382 10:05:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:02.382 10:05:15 -- accel/accel.sh@42 -- # jq -r . 00:10:02.382 [2024-04-24 10:05:15.368681] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
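As a quick sanity check of the copy figures reported above (551168 transfers/s at a 4096-byte transfer size against the printed 2153 MiB/s), plain shell arithmetic reproduces the bandwidth number. This is only a cross-check of the logged values, not part of the test itself.

echo $(( 551168 * 4096 / 1048576 ))   # bytes/s converted to MiB/s; prints 2153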
00:10:02.382 [2024-04-24 10:05:15.368778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159461 ] 00:10:02.382 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.382 [2024-04-24 10:05:15.444151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.382 [2024-04-24 10:05:15.524117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val= 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val= 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val=0x1 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val= 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val= 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val=copy 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val= 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val=software 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@23 -- # accel_module=software 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val=32 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val=32 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val=1 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val=Yes 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val= 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:02.382 10:05:15 -- accel/accel.sh@21 -- # val= 00:10:02.382 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:10:02.382 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:10:03.782 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.782 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.782 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.782 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.782 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.782 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.782 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.782 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.782 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.782 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.782 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.782 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.782 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.782 10:05:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:03.782 10:05:16 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:03.782 10:05:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:03.782 00:10:03.782 real 0m2.745s 00:10:03.782 user 0m2.452s 00:10:03.782 sys 0m0.289s 00:10:03.782 10:05:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.782 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:10:03.782 ************************************ 00:10:03.782 END TEST accel_copy 00:10:03.782 ************************************ 00:10:03.782 10:05:16 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:03.782 10:05:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:03.782 10:05:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:03.782 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:10:03.782 ************************************ 00:10:03.782 START TEST accel_fill 00:10:03.782 ************************************ 00:10:03.782 10:05:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:03.782 10:05:16 -- accel/accel.sh@16 -- # local accel_opc 
00:10:03.782 10:05:16 -- accel/accel.sh@17 -- # local accel_module 00:10:03.782 10:05:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:03.782 10:05:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:03.782 10:05:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:03.782 10:05:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:03.782 10:05:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.782 10:05:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.782 10:05:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:03.782 10:05:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:03.782 10:05:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:03.782 10:05:16 -- accel/accel.sh@42 -- # jq -r . 00:10:03.782 [2024-04-24 10:05:16.783637] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:03.782 [2024-04-24 10:05:16.783740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159658 ] 00:10:03.782 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.782 [2024-04-24 10:05:16.857309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.782 [2024-04-24 10:05:16.937440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.161 10:05:18 -- accel/accel.sh@18 -- # out=' 00:10:05.161 SPDK Configuration: 00:10:05.161 Core mask: 0x1 00:10:05.161 00:10:05.161 Accel Perf Configuration: 00:10:05.161 Workload Type: fill 00:10:05.161 Fill pattern: 0x80 00:10:05.161 Transfer size: 4096 bytes 00:10:05.161 Vector count 1 00:10:05.161 Module: software 00:10:05.161 Queue depth: 64 00:10:05.161 Allocate depth: 64 00:10:05.161 # threads/core: 1 00:10:05.161 Run time: 1 seconds 00:10:05.161 Verify: Yes 00:10:05.161 00:10:05.161 Running for 1 seconds... 00:10:05.161 00:10:05.161 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:05.161 ------------------------------------------------------------------------------------ 00:10:05.161 0,0 960832/s 3753 MiB/s 0 0 00:10:05.161 ==================================================================================== 00:10:05.161 Total 960832/s 3753 MiB/s 0 0' 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:05.161 10:05:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:05.161 10:05:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:05.161 10:05:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:05.161 10:05:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:05.161 10:05:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:05.161 10:05:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:05.161 10:05:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:05.161 10:05:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:05.161 10:05:18 -- accel/accel.sh@42 -- # jq -r . 00:10:05.161 [2024-04-24 10:05:18.151832] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
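The fill pass above is launched with "-w fill -f 128 -q 64 -a 64", and the configuration it prints reports "Fill pattern: 0x80", "Queue depth: 64" and "Allocate depth: 64"; the -f value is simply echoed back in hex. The one-liner below is just a cross-check on the logged output, nothing SPDK-specific.

printf '0x%x\n' 128   # prints 0x80, matching "Fill pattern: 0x80"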
00:10:05.161 [2024-04-24 10:05:18.151928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159842 ] 00:10:05.161 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.161 [2024-04-24 10:05:18.227297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.161 [2024-04-24 10:05:18.307480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val=0x1 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val=fill 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val=0x80 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val=software 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@23 -- # accel_module=software 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val=64 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val=64 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- 
accel/accel.sh@21 -- # val=1 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val=Yes 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.161 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:06.539 10:05:19 -- accel/accel.sh@21 -- # val= 00:10:06.539 10:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # IFS=: 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # read -r var val 00:10:06.539 10:05:19 -- accel/accel.sh@21 -- # val= 00:10:06.539 10:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # IFS=: 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # read -r var val 00:10:06.539 10:05:19 -- accel/accel.sh@21 -- # val= 00:10:06.539 10:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # IFS=: 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # read -r var val 00:10:06.539 10:05:19 -- accel/accel.sh@21 -- # val= 00:10:06.539 10:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # IFS=: 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # read -r var val 00:10:06.539 10:05:19 -- accel/accel.sh@21 -- # val= 00:10:06.539 10:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # IFS=: 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # read -r var val 00:10:06.539 10:05:19 -- accel/accel.sh@21 -- # val= 00:10:06.539 10:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # IFS=: 00:10:06.539 10:05:19 -- accel/accel.sh@20 -- # read -r var val 00:10:06.539 10:05:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:06.539 10:05:19 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:06.539 10:05:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:06.539 00:10:06.539 real 0m2.739s 00:10:06.539 user 0m2.456s 00:10:06.539 sys 0m0.280s 00:10:06.539 10:05:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.539 10:05:19 -- common/autotest_common.sh@10 -- # set +x 00:10:06.539 ************************************ 00:10:06.539 END TEST accel_fill 00:10:06.539 ************************************ 00:10:06.539 10:05:19 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:06.539 10:05:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:06.539 10:05:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:06.539 10:05:19 -- common/autotest_common.sh@10 -- # set +x 00:10:06.539 ************************************ 00:10:06.539 START TEST 
accel_copy_crc32c 00:10:06.539 ************************************ 00:10:06.539 10:05:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:06.539 10:05:19 -- accel/accel.sh@16 -- # local accel_opc 00:10:06.539 10:05:19 -- accel/accel.sh@17 -- # local accel_module 00:10:06.539 10:05:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:06.539 10:05:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:06.539 10:05:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:06.539 10:05:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:06.540 10:05:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.540 10:05:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.540 10:05:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:06.540 10:05:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:06.540 10:05:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:06.540 10:05:19 -- accel/accel.sh@42 -- # jq -r . 00:10:06.540 [2024-04-24 10:05:19.563271] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:06.540 [2024-04-24 10:05:19.563367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160040 ] 00:10:06.540 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.540 [2024-04-24 10:05:19.639134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.540 [2024-04-24 10:05:19.719056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.041 10:05:20 -- accel/accel.sh@18 -- # out=' 00:10:08.041 SPDK Configuration: 00:10:08.041 Core mask: 0x1 00:10:08.041 00:10:08.041 Accel Perf Configuration: 00:10:08.041 Workload Type: copy_crc32c 00:10:08.041 CRC-32C seed: 0 00:10:08.041 Vector size: 4096 bytes 00:10:08.041 Transfer size: 4096 bytes 00:10:08.041 Vector count 1 00:10:08.041 Module: software 00:10:08.041 Queue depth: 32 00:10:08.041 Allocate depth: 32 00:10:08.041 # threads/core: 1 00:10:08.041 Run time: 1 seconds 00:10:08.041 Verify: Yes 00:10:08.041 00:10:08.041 Running for 1 seconds... 00:10:08.041 00:10:08.041 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:08.041 ------------------------------------------------------------------------------------ 00:10:08.041 0,0 395360/s 1544 MiB/s 0 0 00:10:08.041 ==================================================================================== 00:10:08.041 Total 395360/s 1544 MiB/s 0 0' 00:10:08.041 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:08.041 10:05:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:08.041 10:05:20 -- accel/accel.sh@12 -- # build_accel_config 00:10:08.041 10:05:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:08.041 10:05:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:08.041 10:05:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:08.041 10:05:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:08.041 10:05:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:08.041 10:05:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:08.041 10:05:20 -- accel/accel.sh@42 -- # jq -r . 
00:10:08.041 [2024-04-24 10:05:20.926256] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:08.041 [2024-04-24 10:05:20.926349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160228 ] 00:10:08.041 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.041 [2024-04-24 10:05:21.000156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.041 [2024-04-24 10:05:21.076550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val=0x1 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val=0 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val=software 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@23 -- # accel_module=software 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val=32 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 
00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val=32 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val=1 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val=Yes 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.041 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.041 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.041 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.977 10:05:22 -- accel/accel.sh@21 -- # val= 00:10:08.977 10:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.977 10:05:22 -- accel/accel.sh@21 -- # val= 00:10:08.977 10:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.977 10:05:22 -- accel/accel.sh@21 -- # val= 00:10:08.977 10:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.977 10:05:22 -- accel/accel.sh@21 -- # val= 00:10:08.977 10:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.977 10:05:22 -- accel/accel.sh@21 -- # val= 00:10:08.977 10:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.977 10:05:22 -- accel/accel.sh@21 -- # val= 00:10:08.977 10:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.977 10:05:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.977 10:05:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:08.977 10:05:22 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:08.977 10:05:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:08.977 00:10:08.977 real 0m2.711s 00:10:08.977 user 0m2.438s 00:10:08.977 sys 0m0.271s 00:10:08.977 10:05:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.977 10:05:22 -- common/autotest_common.sh@10 -- # set +x 00:10:08.977 ************************************ 00:10:08.977 END TEST accel_copy_crc32c 00:10:08.977 ************************************ 00:10:09.242 
10:05:22 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:09.242 10:05:22 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:09.242 10:05:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:09.242 10:05:22 -- common/autotest_common.sh@10 -- # set +x 00:10:09.242 ************************************ 00:10:09.242 START TEST accel_copy_crc32c_C2 00:10:09.242 ************************************ 00:10:09.242 10:05:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:09.242 10:05:22 -- accel/accel.sh@16 -- # local accel_opc 00:10:09.242 10:05:22 -- accel/accel.sh@17 -- # local accel_module 00:10:09.242 10:05:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:09.242 10:05:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:09.242 10:05:22 -- accel/accel.sh@12 -- # build_accel_config 00:10:09.242 10:05:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:09.242 10:05:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.242 10:05:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.242 10:05:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:09.242 10:05:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:09.242 10:05:22 -- accel/accel.sh@41 -- # local IFS=, 00:10:09.242 10:05:22 -- accel/accel.sh@42 -- # jq -r . 00:10:09.242 [2024-04-24 10:05:22.318855] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:09.242 [2024-04-24 10:05:22.318948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160422 ] 00:10:09.242 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.242 [2024-04-24 10:05:22.393312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.242 [2024-04-24 10:05:22.473795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.621 10:05:23 -- accel/accel.sh@18 -- # out=' 00:10:10.621 SPDK Configuration: 00:10:10.621 Core mask: 0x1 00:10:10.621 00:10:10.621 Accel Perf Configuration: 00:10:10.621 Workload Type: copy_crc32c 00:10:10.621 CRC-32C seed: 0 00:10:10.621 Vector size: 4096 bytes 00:10:10.621 Transfer size: 8192 bytes 00:10:10.621 Vector count 2 00:10:10.621 Module: software 00:10:10.621 Queue depth: 32 00:10:10.621 Allocate depth: 32 00:10:10.621 # threads/core: 1 00:10:10.621 Run time: 1 seconds 00:10:10.621 Verify: Yes 00:10:10.621 00:10:10.621 Running for 1 seconds... 
00:10:10.621 00:10:10.621 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:10.621 ------------------------------------------------------------------------------------ 00:10:10.621 0,0 287168/s 2243 MiB/s 0 0 00:10:10.621 ==================================================================================== 00:10:10.621 Total 287168/s 1121 MiB/s 0 0' 00:10:10.621 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.621 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.621 10:05:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:10.621 10:05:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:10.621 10:05:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:10.621 10:05:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:10.621 10:05:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.621 10:05:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.622 10:05:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:10.622 10:05:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:10.622 10:05:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:10.622 10:05:23 -- accel/accel.sh@42 -- # jq -r . 00:10:10.622 [2024-04-24 10:05:23.682327] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:10.622 [2024-04-24 10:05:23.682430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160604 ] 00:10:10.622 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.622 [2024-04-24 10:05:23.755405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.622 [2024-04-24 10:05:23.835219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val=0x1 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val=0 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # 
IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val=software 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@23 -- # accel_module=software 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val=32 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val=32 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val=1 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val=Yes 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.622 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.622 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.622 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:12.000 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.000 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.000 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.000 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.000 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.000 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.000 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.000 10:05:25 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.000 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.000 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.000 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.000 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.000 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.000 10:05:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:12.000 10:05:25 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:12.000 10:05:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:12.000 00:10:12.000 real 0m2.732s 00:10:12.000 user 0m2.442s 00:10:12.000 sys 0m0.288s 00:10:12.000 10:05:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.000 10:05:25 -- common/autotest_common.sh@10 -- # set +x 00:10:12.000 ************************************ 00:10:12.000 END TEST accel_copy_crc32c_C2 00:10:12.000 ************************************ 00:10:12.000 10:05:25 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:12.000 10:05:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:12.000 10:05:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:12.000 10:05:25 -- common/autotest_common.sh@10 -- # set +x 00:10:12.000 ************************************ 00:10:12.000 START TEST accel_dualcast 00:10:12.000 ************************************ 00:10:12.000 10:05:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:10:12.000 10:05:25 -- accel/accel.sh@16 -- # local accel_opc 00:10:12.000 10:05:25 -- accel/accel.sh@17 -- # local accel_module 00:10:12.000 10:05:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:12.001 10:05:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:12.001 10:05:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:12.001 10:05:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:12.001 10:05:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:12.001 10:05:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:12.001 10:05:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:12.001 10:05:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:12.001 10:05:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:12.001 10:05:25 -- accel/accel.sh@42 -- # jq -r . 00:10:12.001 [2024-04-24 10:05:25.093131] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:12.001 [2024-04-24 10:05:25.093229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160804 ] 00:10:12.001 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.001 [2024-04-24 10:05:25.166858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.001 [2024-04-24 10:05:25.248169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.377 10:05:26 -- accel/accel.sh@18 -- # out=' 00:10:13.377 SPDK Configuration: 00:10:13.377 Core mask: 0x1 00:10:13.377 00:10:13.377 Accel Perf Configuration: 00:10:13.377 Workload Type: dualcast 00:10:13.377 Transfer size: 4096 bytes 00:10:13.377 Vector count 1 00:10:13.377 Module: software 00:10:13.377 Queue depth: 32 00:10:13.377 Allocate depth: 32 00:10:13.377 # threads/core: 1 00:10:13.377 Run time: 1 seconds 00:10:13.377 Verify: Yes 00:10:13.377 00:10:13.377 Running for 1 seconds... 00:10:13.377 00:10:13.377 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:13.377 ------------------------------------------------------------------------------------ 00:10:13.377 0,0 618592/s 2416 MiB/s 0 0 00:10:13.377 ==================================================================================== 00:10:13.377 Total 618592/s 2416 MiB/s 0 0' 00:10:13.377 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.377 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.377 10:05:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:13.377 10:05:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:13.377 10:05:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:13.377 10:05:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:13.377 10:05:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.377 10:05:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.377 10:05:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:13.377 10:05:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:13.377 10:05:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:13.377 10:05:26 -- accel/accel.sh@42 -- # jq -r . 00:10:13.377 [2024-04-24 10:05:26.461980] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:13.377 [2024-04-24 10:05:26.462150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160984 ] 00:10:13.377 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.377 [2024-04-24 10:05:26.536968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.377 [2024-04-24 10:05:26.617382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val= 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val= 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val=0x1 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val= 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val= 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val=dualcast 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val= 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val=software 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@23 -- # accel_module=software 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val=32 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val=32 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.635 10:05:26 -- accel/accel.sh@21 -- # val=1 00:10:13.635 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.635 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.636 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.636 10:05:26 
-- accel/accel.sh@21 -- # val='1 seconds' 00:10:13.636 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.636 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.636 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.636 10:05:26 -- accel/accel.sh@21 -- # val=Yes 00:10:13.636 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.636 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.636 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.636 10:05:26 -- accel/accel.sh@21 -- # val= 00:10:13.636 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.636 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.636 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:13.636 10:05:26 -- accel/accel.sh@21 -- # val= 00:10:13.636 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.636 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:10:13.636 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:10:14.570 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.570 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.570 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.570 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.570 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.570 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.570 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.570 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.570 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.570 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.570 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.570 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.570 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.570 10:05:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:14.570 10:05:27 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:14.570 10:05:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:14.570 00:10:14.570 real 0m2.739s 00:10:14.570 user 0m2.446s 00:10:14.570 sys 0m0.291s 00:10:14.570 10:05:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.570 10:05:27 -- common/autotest_common.sh@10 -- # set +x 00:10:14.570 ************************************ 00:10:14.570 END TEST accel_dualcast 00:10:14.570 ************************************ 00:10:14.829 10:05:27 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:14.829 10:05:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:14.829 10:05:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:14.829 10:05:27 -- common/autotest_common.sh@10 -- # set +x 00:10:14.829 ************************************ 00:10:14.829 START TEST accel_compare 00:10:14.829 ************************************ 00:10:14.829 10:05:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:10:14.829 10:05:27 -- accel/accel.sh@16 -- # local accel_opc 00:10:14.829 10:05:27 
-- accel/accel.sh@17 -- # local accel_module 00:10:14.829 10:05:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:14.829 10:05:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:14.829 10:05:27 -- accel/accel.sh@12 -- # build_accel_config 00:10:14.829 10:05:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:14.829 10:05:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.829 10:05:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.829 10:05:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:14.829 10:05:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:14.829 10:05:27 -- accel/accel.sh@41 -- # local IFS=, 00:10:14.829 10:05:27 -- accel/accel.sh@42 -- # jq -r . 00:10:14.829 [2024-04-24 10:05:27.882191] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:14.829 [2024-04-24 10:05:27.882289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161177 ] 00:10:14.829 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.829 [2024-04-24 10:05:27.960202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.829 [2024-04-24 10:05:28.042470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.205 10:05:29 -- accel/accel.sh@18 -- # out=' 00:10:16.205 SPDK Configuration: 00:10:16.205 Core mask: 0x1 00:10:16.205 00:10:16.205 Accel Perf Configuration: 00:10:16.205 Workload Type: compare 00:10:16.205 Transfer size: 4096 bytes 00:10:16.205 Vector count 1 00:10:16.205 Module: software 00:10:16.205 Queue depth: 32 00:10:16.205 Allocate depth: 32 00:10:16.205 # threads/core: 1 00:10:16.205 Run time: 1 seconds 00:10:16.205 Verify: Yes 00:10:16.205 00:10:16.205 Running for 1 seconds... 00:10:16.205 00:10:16.205 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:16.205 ------------------------------------------------------------------------------------ 00:10:16.205 0,0 811616/s 3170 MiB/s 0 0 00:10:16.205 ==================================================================================== 00:10:16.205 Total 811616/s 3170 MiB/s 0 0' 00:10:16.205 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.205 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.205 10:05:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:16.205 10:05:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:16.205 10:05:29 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.205 10:05:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.206 10:05:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.206 10:05:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.206 10:05:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.206 10:05:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.206 10:05:29 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.206 10:05:29 -- accel/accel.sh@42 -- # jq -r . 00:10:16.206 [2024-04-24 10:05:29.255828] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
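The full accel_perf command line for the compare case is visible in the trace above. Outside the harness the same run would look roughly like the sketch below; the flag meanings are inferred from the "Accel Perf Configuration" block rather than from the tool's help text, and the generated JSON config the harness pipes in via `-c /dev/fd/62` is omitted:

```sh
# Manual re-run of the compare case, with the binary path as used in this job:
#   -t 1        -> "Run time: 1 seconds"
#   -w compare  -> "Workload Type: compare"
#   -y          -> "Verify: Yes"
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y
```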
00:10:16.206 [2024-04-24 10:05:29.255923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161366 ] 00:10:16.206 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.206 [2024-04-24 10:05:29.329738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.206 [2024-04-24 10:05:29.410302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val=0x1 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val=compare 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val=software 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@23 -- # accel_module=software 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val=32 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val=32 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val=1 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val=Yes 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.206 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.206 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.206 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:17.581 10:05:30 -- accel/accel.sh@21 -- # val= 00:10:17.581 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.581 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:10:17.581 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:10:17.581 10:05:30 -- accel/accel.sh@21 -- # val= 00:10:17.581 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.581 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:10:17.581 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:10:17.581 10:05:30 -- accel/accel.sh@21 -- # val= 00:10:17.581 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.581 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:10:17.581 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:10:17.581 10:05:30 -- accel/accel.sh@21 -- # val= 00:10:17.581 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.581 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:10:17.581 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:10:17.582 10:05:30 -- accel/accel.sh@21 -- # val= 00:10:17.582 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.582 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:10:17.582 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:10:17.582 10:05:30 -- accel/accel.sh@21 -- # val= 00:10:17.582 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.582 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:10:17.582 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:10:17.582 10:05:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:17.582 10:05:30 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:17.582 10:05:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:17.582 00:10:17.582 real 0m2.744s 00:10:17.582 user 0m2.455s 00:10:17.582 sys 0m0.286s 00:10:17.582 10:05:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.582 10:05:30 -- common/autotest_common.sh@10 -- # set +x 00:10:17.582 ************************************ 00:10:17.582 END TEST accel_compare 00:10:17.582 ************************************ 00:10:17.582 10:05:30 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:17.582 10:05:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:17.582 10:05:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:17.582 10:05:30 -- common/autotest_common.sh@10 -- # set +x 00:10:17.582 ************************************ 00:10:17.582 START TEST accel_xor 00:10:17.582 ************************************ 00:10:17.582 10:05:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:10:17.582 10:05:30 -- accel/accel.sh@16 -- # local accel_opc 00:10:17.582 10:05:30 -- accel/accel.sh@17 
-- # local accel_module 00:10:17.582 10:05:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:17.582 10:05:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:17.582 10:05:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:17.582 10:05:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:17.582 10:05:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.582 10:05:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.582 10:05:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:17.582 10:05:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:17.582 10:05:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:17.582 10:05:30 -- accel/accel.sh@42 -- # jq -r . 00:10:17.582 [2024-04-24 10:05:30.657718] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:17.582 [2024-04-24 10:05:30.657790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161563 ] 00:10:17.582 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.582 [2024-04-24 10:05:30.729484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.582 [2024-04-24 10:05:30.806407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.959 10:05:31 -- accel/accel.sh@18 -- # out=' 00:10:18.959 SPDK Configuration: 00:10:18.959 Core mask: 0x1 00:10:18.959 00:10:18.959 Accel Perf Configuration: 00:10:18.959 Workload Type: xor 00:10:18.959 Source buffers: 2 00:10:18.959 Transfer size: 4096 bytes 00:10:18.959 Vector count 1 00:10:18.959 Module: software 00:10:18.959 Queue depth: 32 00:10:18.959 Allocate depth: 32 00:10:18.959 # threads/core: 1 00:10:18.959 Run time: 1 seconds 00:10:18.959 Verify: Yes 00:10:18.959 00:10:18.959 Running for 1 seconds... 00:10:18.959 00:10:18.959 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:18.959 ------------------------------------------------------------------------------------ 00:10:18.959 0,0 675584/s 2639 MiB/s 0 0 00:10:18.959 ==================================================================================== 00:10:18.959 Total 675584/s 2639 MiB/s 0 0' 00:10:18.959 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:18.959 10:05:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:18.959 10:05:31 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.959 10:05:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.959 10:05:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.959 10:05:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.959 10:05:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.959 10:05:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.959 10:05:31 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.959 10:05:31 -- accel/accel.sh@42 -- # jq -r . 00:10:18.959 [2024-04-24 10:05:32.005083] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
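Most of the repeated `val=` / `case "$var" in` / `read -r var val` entries in this part of the log come from accel.sh parsing the report printed by the first accel_perf run, to confirm which opcode and module were actually exercised. The snippet below is only a rough reconstruction of that loop from the xtrace output; the real script may differ in detail:

```sh
#!/usr/bin/env bash
# Rough reconstruction (from the xtrace lines only) of the parsing loop: the harness reads the
# captured accel_perf report line by line, splits on ':' to learn the opcode and module that
# actually ran, then asserts on them.
out=$'Workload Type: dualcast\nModule: software'   # stand-in for the captured report
accel_opc='' accel_module=''
while IFS=: read -r var val; do
    case "$var" in
        *'Workload Type'*) accel_opc=${val# } ;;
        *'Module'*)        accel_module=${val# } ;;
    esac
done <<< "$out"
[[ -n $accel_module && -n $accel_opc ]]   # mirrors the "[[ -n software ]]" style checks in the log
[[ $accel_module == software ]]
echo "opcode=$accel_opc module=$accel_module"
```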
00:10:18.959 [2024-04-24 10:05:32.005169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161743 ] 00:10:18.959 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.959 [2024-04-24 10:05:32.081952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.959 [2024-04-24 10:05:32.161050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val=0x1 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val=xor 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val=2 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val=software 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@23 -- # accel_module=software 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val=32 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val=32 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- 
accel/accel.sh@21 -- # val=1 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val=Yes 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.959 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:18.959 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.959 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:20.343 10:05:33 -- accel/accel.sh@21 -- # val= 00:10:20.343 10:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # IFS=: 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # read -r var val 00:10:20.343 10:05:33 -- accel/accel.sh@21 -- # val= 00:10:20.343 10:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # IFS=: 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # read -r var val 00:10:20.343 10:05:33 -- accel/accel.sh@21 -- # val= 00:10:20.343 10:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # IFS=: 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # read -r var val 00:10:20.343 10:05:33 -- accel/accel.sh@21 -- # val= 00:10:20.343 10:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # IFS=: 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # read -r var val 00:10:20.343 10:05:33 -- accel/accel.sh@21 -- # val= 00:10:20.343 10:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # IFS=: 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # read -r var val 00:10:20.343 10:05:33 -- accel/accel.sh@21 -- # val= 00:10:20.343 10:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # IFS=: 00:10:20.343 10:05:33 -- accel/accel.sh@20 -- # read -r var val 00:10:20.343 10:05:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:20.343 10:05:33 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:20.343 10:05:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:20.343 00:10:20.343 real 0m2.696s 00:10:20.343 user 0m2.434s 00:10:20.343 sys 0m0.259s 00:10:20.343 10:05:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.343 10:05:33 -- common/autotest_common.sh@10 -- # set +x 00:10:20.343 ************************************ 00:10:20.343 END TEST accel_xor 00:10:20.343 ************************************ 00:10:20.343 10:05:33 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:20.343 10:05:33 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:20.343 10:05:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:20.343 10:05:33 -- common/autotest_common.sh@10 -- # set +x 00:10:20.343 ************************************ 00:10:20.343 START TEST accel_xor 
00:10:20.343 ************************************ 00:10:20.343 10:05:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:10:20.343 10:05:33 -- accel/accel.sh@16 -- # local accel_opc 00:10:20.343 10:05:33 -- accel/accel.sh@17 -- # local accel_module 00:10:20.343 10:05:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:20.343 10:05:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:20.343 10:05:33 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.343 10:05:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.343 10:05:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.343 10:05:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.343 10:05:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.343 10:05:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.344 10:05:33 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.344 10:05:33 -- accel/accel.sh@42 -- # jq -r . 00:10:20.344 [2024-04-24 10:05:33.406566] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:20.344 [2024-04-24 10:05:33.406653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161946 ] 00:10:20.344 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.344 [2024-04-24 10:05:33.483779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.344 [2024-04-24 10:05:33.564445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.717 10:05:34 -- accel/accel.sh@18 -- # out=' 00:10:21.717 SPDK Configuration: 00:10:21.717 Core mask: 0x1 00:10:21.717 00:10:21.717 Accel Perf Configuration: 00:10:21.717 Workload Type: xor 00:10:21.717 Source buffers: 3 00:10:21.717 Transfer size: 4096 bytes 00:10:21.717 Vector count 1 00:10:21.717 Module: software 00:10:21.717 Queue depth: 32 00:10:21.717 Allocate depth: 32 00:10:21.717 # threads/core: 1 00:10:21.717 Run time: 1 seconds 00:10:21.717 Verify: Yes 00:10:21.717 00:10:21.717 Running for 1 seconds... 00:10:21.717 00:10:21.717 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:21.717 ------------------------------------------------------------------------------------ 00:10:21.717 0,0 665056/s 2597 MiB/s 0 0 00:10:21.717 ==================================================================================== 00:10:21.717 Total 665056/s 2597 MiB/s 0 0' 00:10:21.717 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.717 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.717 10:05:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:21.717 10:05:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:21.717 10:05:34 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.717 10:05:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:21.717 10:05:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.717 10:05:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.717 10:05:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:21.717 10:05:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:21.717 10:05:34 -- accel/accel.sh@41 -- # local IFS=, 00:10:21.717 10:05:34 -- accel/accel.sh@42 -- # jq -r . 00:10:21.717 [2024-04-24 10:05:34.778146] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
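This second accel_xor case repeats the xor workload with `-x 3`; comparing the two configuration blocks, the only difference is the source-buffer count (2 by default, 3 here), and throughput changes only slightly (675584/s vs 665056/s). Side by side, with the harness-supplied `-c /dev/fd/62` left out:

```sh
# The two xor invocations from this log; -x appears to set the number of source buffers,
# matching the "Source buffers:" lines in the two configuration blocks.
ACCEL_PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
"$ACCEL_PERF" -t 1 -w xor -y          # Source buffers: 2 (default)
"$ACCEL_PERF" -t 1 -w xor -y -x 3     # Source buffers: 3
```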
00:10:21.717 [2024-04-24 10:05:34.778237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162126 ] 00:10:21.717 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.717 [2024-04-24 10:05:34.855496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.717 [2024-04-24 10:05:34.936133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.717 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.717 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.717 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.717 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.717 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val=0x1 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val=xor 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val=3 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val=software 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@23 -- # accel_module=software 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val=32 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val=32 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- 
accel/accel.sh@21 -- # val=1 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val=Yes 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.718 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.718 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.718 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:23.093 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.093 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.093 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.093 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.093 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.093 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.093 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.093 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.093 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.093 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.093 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.093 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.093 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.093 10:05:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:23.093 10:05:36 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:23.093 10:05:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.093 00:10:23.093 real 0m2.745s 00:10:23.093 user 0m2.458s 00:10:23.093 sys 0m0.284s 00:10:23.093 10:05:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.093 10:05:36 -- common/autotest_common.sh@10 -- # set +x 00:10:23.093 ************************************ 00:10:23.093 END TEST accel_xor 00:10:23.093 ************************************ 00:10:23.093 10:05:36 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:23.093 10:05:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:23.093 10:05:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:23.093 10:05:36 -- common/autotest_common.sh@10 -- # set +x 00:10:23.093 ************************************ 00:10:23.093 START TEST 
accel_dif_verify 00:10:23.093 ************************************ 00:10:23.093 10:05:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:10:23.093 10:05:36 -- accel/accel.sh@16 -- # local accel_opc 00:10:23.093 10:05:36 -- accel/accel.sh@17 -- # local accel_module 00:10:23.093 10:05:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:23.093 10:05:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:23.093 10:05:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:23.093 10:05:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:23.093 10:05:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.093 10:05:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.093 10:05:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:23.093 10:05:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:23.093 10:05:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:23.093 10:05:36 -- accel/accel.sh@42 -- # jq -r . 00:10:23.093 [2024-04-24 10:05:36.194537] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:23.093 [2024-04-24 10:05:36.194632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162319 ] 00:10:23.093 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.093 [2024-04-24 10:05:36.269935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.093 [2024-04-24 10:05:36.350182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.467 10:05:37 -- accel/accel.sh@18 -- # out=' 00:10:24.467 SPDK Configuration: 00:10:24.467 Core mask: 0x1 00:10:24.467 00:10:24.467 Accel Perf Configuration: 00:10:24.467 Workload Type: dif_verify 00:10:24.467 Vector size: 4096 bytes 00:10:24.467 Transfer size: 4096 bytes 00:10:24.467 Block size: 512 bytes 00:10:24.467 Metadata size: 8 bytes 00:10:24.467 Vector count 1 00:10:24.467 Module: software 00:10:24.467 Queue depth: 32 00:10:24.467 Allocate depth: 32 00:10:24.467 # threads/core: 1 00:10:24.467 Run time: 1 seconds 00:10:24.467 Verify: No 00:10:24.467 00:10:24.467 Running for 1 seconds... 00:10:24.467 00:10:24.467 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:24.467 ------------------------------------------------------------------------------------ 00:10:24.467 0,0 242144/s 960 MiB/s 0 0 00:10:24.468 ==================================================================================== 00:10:24.468 Total 242144/s 945 MiB/s 0 0' 00:10:24.468 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.468 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.468 10:05:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:24.468 10:05:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:24.468 10:05:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.468 10:05:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:24.468 10:05:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.468 10:05:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.468 10:05:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:24.468 10:05:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:24.468 10:05:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:24.468 10:05:37 -- accel/accel.sh@42 -- # jq -r . 
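For the DIF cases the configuration adds block and metadata sizes: the 4096-byte vector divides into eight 512-byte blocks, each carrying 8 bytes of protection information, which presumably explains why these runs land well below the plain copy/compare throughput. A small check of the block count and the Total row, using the dif_verify figures above:

```sh
# Block count per vector and total bandwidth for the dif_verify run above:
awk 'BEGIN {
    printf "blocks per vector: %d\n", 4096 / 512
    printf "total: %.0f MiB/s\n",     242144 * 4096 / (1024 * 1024)
}'
# -> 8 blocks per vector and ~946 MiB/s, in line with the 945 MiB/s Total row
```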
00:10:24.468 [2024-04-24 10:05:37.565717] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:24.468 [2024-04-24 10:05:37.565814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162505 ] 00:10:24.468 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.468 [2024-04-24 10:05:37.642802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.468 [2024-04-24 10:05:37.723262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val= 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val= 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val=0x1 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val= 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val= 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val=dif_verify 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val= 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val=software 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@23 -- # 
accel_module=software 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val=32 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val=32 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val=1 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val=No 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val= 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:24.726 10:05:37 -- accel/accel.sh@21 -- # val= 00:10:24.726 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:24.726 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:25.661 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.661 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.661 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.661 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.661 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.661 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.661 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.661 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.661 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.661 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.661 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.661 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.661 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.661 10:05:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:25.661 10:05:38 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:25.661 10:05:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:25.661 00:10:25.661 real 0m2.746s 00:10:25.661 user 0m2.461s 00:10:25.661 sys 0m0.282s 00:10:25.661 10:05:38 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.661 10:05:38 -- common/autotest_common.sh@10 -- # set +x 00:10:25.661 ************************************ 00:10:25.661 END TEST accel_dif_verify 00:10:25.661 ************************************ 00:10:25.920 10:05:38 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:25.920 10:05:38 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:25.920 10:05:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.920 10:05:38 -- common/autotest_common.sh@10 -- # set +x 00:10:25.920 ************************************ 00:10:25.920 START TEST accel_dif_generate 00:10:25.920 ************************************ 00:10:25.920 10:05:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:10:25.920 10:05:38 -- accel/accel.sh@16 -- # local accel_opc 00:10:25.920 10:05:38 -- accel/accel.sh@17 -- # local accel_module 00:10:25.920 10:05:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:25.920 10:05:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:25.920 10:05:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.920 10:05:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.920 10:05:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.920 10:05:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.920 10:05:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.920 10:05:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.920 10:05:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.920 10:05:38 -- accel/accel.sh@42 -- # jq -r . 00:10:25.920 [2024-04-24 10:05:38.982870] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:25.920 [2024-04-24 10:05:38.982963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162701 ] 00:10:25.920 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.920 [2024-04-24 10:05:39.057685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.920 [2024-04-24 10:05:39.145705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.294 10:05:40 -- accel/accel.sh@18 -- # out=' 00:10:27.294 SPDK Configuration: 00:10:27.294 Core mask: 0x1 00:10:27.294 00:10:27.294 Accel Perf Configuration: 00:10:27.294 Workload Type: dif_generate 00:10:27.294 Vector size: 4096 bytes 00:10:27.294 Transfer size: 4096 bytes 00:10:27.294 Block size: 512 bytes 00:10:27.294 Metadata size: 8 bytes 00:10:27.294 Vector count 1 00:10:27.294 Module: software 00:10:27.294 Queue depth: 32 00:10:27.294 Allocate depth: 32 00:10:27.294 # threads/core: 1 00:10:27.294 Run time: 1 seconds 00:10:27.294 Verify: No 00:10:27.294 00:10:27.294 Running for 1 seconds... 
00:10:27.294 00:10:27.295 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:27.295 ------------------------------------------------------------------------------------ 00:10:27.295 0,0 277408/s 1100 MiB/s 0 0 00:10:27.295 ==================================================================================== 00:10:27.295 Total 277408/s 1083 MiB/s 0 0' 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:27.295 10:05:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:27.295 10:05:40 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.295 10:05:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.295 10:05:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.295 10:05:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.295 10:05:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.295 10:05:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.295 10:05:40 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.295 10:05:40 -- accel/accel.sh@42 -- # jq -r . 00:10:27.295 [2024-04-24 10:05:40.364027] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:27.295 [2024-04-24 10:05:40.364130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162885 ] 00:10:27.295 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.295 [2024-04-24 10:05:40.441951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.295 [2024-04-24 10:05:40.522663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val=0x1 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val=dif_generate 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # 
IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val=software 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@23 -- # accel_module=software 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val=32 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val=32 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val=1 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val=No 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:27.295 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:27.295 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:27.295 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:28.670 10:05:41 -- accel/accel.sh@21 -- # val= 00:10:28.670 10:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:10:28.670 10:05:41 -- accel/accel.sh@21 -- # val= 00:10:28.670 10:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:10:28.670 10:05:41 -- accel/accel.sh@21 -- # val= 00:10:28.670 10:05:41 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:10:28.670 10:05:41 -- accel/accel.sh@21 -- # val= 00:10:28.670 10:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:10:28.670 10:05:41 -- accel/accel.sh@21 -- # val= 00:10:28.670 10:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:10:28.670 10:05:41 -- accel/accel.sh@21 -- # val= 00:10:28.670 10:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:10:28.670 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:10:28.670 10:05:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:28.670 10:05:41 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:28.670 10:05:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:28.670 00:10:28.670 real 0m2.747s 00:10:28.670 user 0m2.459s 00:10:28.670 sys 0m0.287s 00:10:28.670 10:05:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.670 10:05:41 -- common/autotest_common.sh@10 -- # set +x 00:10:28.670 ************************************ 00:10:28.670 END TEST accel_dif_generate 00:10:28.671 ************************************ 00:10:28.671 10:05:41 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:28.671 10:05:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:28.671 10:05:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:28.671 10:05:41 -- common/autotest_common.sh@10 -- # set +x 00:10:28.671 ************************************ 00:10:28.671 START TEST accel_dif_generate_copy 00:10:28.671 ************************************ 00:10:28.671 10:05:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:10:28.671 10:05:41 -- accel/accel.sh@16 -- # local accel_opc 00:10:28.671 10:05:41 -- accel/accel.sh@17 -- # local accel_module 00:10:28.671 10:05:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:28.671 10:05:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:28.671 10:05:41 -- accel/accel.sh@12 -- # build_accel_config 00:10:28.671 10:05:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:28.671 10:05:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.671 10:05:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.671 10:05:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:28.671 10:05:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:28.671 10:05:41 -- accel/accel.sh@41 -- # local IFS=, 00:10:28.671 10:05:41 -- accel/accel.sh@42 -- # jq -r . 00:10:28.671 [2024-04-24 10:05:41.763206] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
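accel_dif_generate_copy is driven exactly like the two previous DIF cases; only the `-w` argument changes. The one-line descriptions in the comments below are inferred from the opcode names, not from this log:

```sh
# The three DIF invocations in this job (minus the generated JSON config passed via -c /dev/fd/62):
ACCEL_PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
"$ACCEL_PERF" -t 1 -w dif_verify          # check protection information on each 512-byte block
"$ACCEL_PERF" -t 1 -w dif_generate        # generate protection information
"$ACCEL_PERF" -t 1 -w dif_generate_copy   # generate it while copying to a second buffer
```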
00:10:28.671 [2024-04-24 10:05:41.763269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163105 ] 00:10:28.671 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.671 [2024-04-24 10:05:41.833868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.671 [2024-04-24 10:05:41.911908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.084 10:05:43 -- accel/accel.sh@18 -- # out=' 00:10:30.084 SPDK Configuration: 00:10:30.084 Core mask: 0x1 00:10:30.084 00:10:30.084 Accel Perf Configuration: 00:10:30.084 Workload Type: dif_generate_copy 00:10:30.084 Vector size: 4096 bytes 00:10:30.084 Transfer size: 4096 bytes 00:10:30.084 Vector count 1 00:10:30.084 Module: software 00:10:30.084 Queue depth: 32 00:10:30.084 Allocate depth: 32 00:10:30.084 # threads/core: 1 00:10:30.084 Run time: 1 seconds 00:10:30.084 Verify: No 00:10:30.084 00:10:30.084 Running for 1 seconds... 00:10:30.084 00:10:30.084 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:30.084 ------------------------------------------------------------------------------------ 00:10:30.084 0,0 214880/s 852 MiB/s 0 0 00:10:30.084 ==================================================================================== 00:10:30.084 Total 214880/s 839 MiB/s 0 0' 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:30.084 10:05:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:30.084 10:05:43 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.084 10:05:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.084 10:05:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.084 10:05:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.084 10:05:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.084 10:05:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.084 10:05:43 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.084 10:05:43 -- accel/accel.sh@42 -- # jq -r . 00:10:30.084 [2024-04-24 10:05:43.107356] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:30.084 [2024-04-24 10:05:43.107449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163287 ] 00:10:30.084 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.084 [2024-04-24 10:05:43.183497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.084 [2024-04-24 10:05:43.265386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val=0x1 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val=software 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@23 -- # accel_module=software 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.084 10:05:43 -- accel/accel.sh@21 -- # val=32 00:10:30.084 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.084 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.085 10:05:43 -- accel/accel.sh@21 -- # val=32 00:10:30.085 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # read -r 
var val 00:10:30.085 10:05:43 -- accel/accel.sh@21 -- # val=1 00:10:30.085 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.085 10:05:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:30.085 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.085 10:05:43 -- accel/accel.sh@21 -- # val=No 00:10:30.085 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.085 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.085 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.085 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.085 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.085 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:31.509 10:05:44 -- accel/accel.sh@21 -- # val= 00:10:31.509 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:31.509 10:05:44 -- accel/accel.sh@21 -- # val= 00:10:31.509 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:31.509 10:05:44 -- accel/accel.sh@21 -- # val= 00:10:31.509 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:31.509 10:05:44 -- accel/accel.sh@21 -- # val= 00:10:31.509 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:31.509 10:05:44 -- accel/accel.sh@21 -- # val= 00:10:31.509 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:31.509 10:05:44 -- accel/accel.sh@21 -- # val= 00:10:31.509 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:31.509 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:31.509 10:05:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:31.509 10:05:44 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:31.509 10:05:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:31.509 00:10:31.509 real 0m2.703s 00:10:31.509 user 0m2.434s 00:10:31.509 sys 0m0.267s 00:10:31.509 10:05:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.509 10:05:44 -- common/autotest_common.sh@10 -- # set +x 00:10:31.509 ************************************ 00:10:31.509 END TEST accel_dif_generate_copy 00:10:31.509 ************************************ 00:10:31.509 10:05:44 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:31.509 10:05:44 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:31.509 10:05:44 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:31.509 10:05:44 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:10:31.509 10:05:44 -- common/autotest_common.sh@10 -- # set +x 00:10:31.509 ************************************ 00:10:31.509 START TEST accel_comp 00:10:31.509 ************************************ 00:10:31.509 10:05:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:31.509 10:05:44 -- accel/accel.sh@16 -- # local accel_opc 00:10:31.509 10:05:44 -- accel/accel.sh@17 -- # local accel_module 00:10:31.509 10:05:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:31.509 10:05:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:31.509 10:05:44 -- accel/accel.sh@12 -- # build_accel_config 00:10:31.509 10:05:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:31.509 10:05:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.509 10:05:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.509 10:05:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:31.509 10:05:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:31.509 10:05:44 -- accel/accel.sh@41 -- # local IFS=, 00:10:31.509 10:05:44 -- accel/accel.sh@42 -- # jq -r . 00:10:31.509 [2024-04-24 10:05:44.519005] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:31.509 [2024-04-24 10:05:44.519121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163520 ] 00:10:31.509 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.509 [2024-04-24 10:05:44.596516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.509 [2024-04-24 10:05:44.678547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.885 10:05:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:32.885 00:10:32.885 SPDK Configuration: 00:10:32.885 Core mask: 0x1 00:10:32.885 00:10:32.885 Accel Perf Configuration: 00:10:32.885 Workload Type: compress 00:10:32.885 Transfer size: 4096 bytes 00:10:32.885 Vector count 1 00:10:32.885 Module: software 00:10:32.885 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:32.885 Queue depth: 32 00:10:32.885 Allocate depth: 32 00:10:32.885 # threads/core: 1 00:10:32.885 Run time: 1 seconds 00:10:32.885 Verify: No 00:10:32.885 00:10:32.885 Running for 1 seconds... 
00:10:32.885 00:10:32.885 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:32.885 ------------------------------------------------------------------------------------ 00:10:32.885 0,0 68288/s 284 MiB/s 0 0 00:10:32.885 ==================================================================================== 00:10:32.885 Total 68288/s 266 MiB/s 0 0' 00:10:32.885 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.885 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.885 10:05:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:32.886 10:05:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:32.886 10:05:45 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.886 10:05:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.886 10:05:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.886 10:05:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.886 10:05:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.886 10:05:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.886 10:05:45 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.886 10:05:45 -- accel/accel.sh@42 -- # jq -r . 00:10:32.886 [2024-04-24 10:05:45.895934] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:32.886 [2024-04-24 10:05:45.896032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163709 ] 00:10:32.886 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.886 [2024-04-24 10:05:45.971303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.886 [2024-04-24 10:05:46.053068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val=0x1 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val=compress 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 
00:10:32.886 10:05:46 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val=software 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@23 -- # accel_module=software 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val=32 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val=32 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val=1 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val=No 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:32.886 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:34.261 10:05:47 -- accel/accel.sh@21 -- # val= 00:10:34.261 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:34.261 10:05:47 -- accel/accel.sh@21 -- # val= 00:10:34.261 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:34.261 10:05:47 -- accel/accel.sh@21 -- # val= 00:10:34.261 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.261 10:05:47 -- 
accel/accel.sh@20 -- # IFS=: 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:34.261 10:05:47 -- accel/accel.sh@21 -- # val= 00:10:34.261 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:34.261 10:05:47 -- accel/accel.sh@21 -- # val= 00:10:34.261 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:34.261 10:05:47 -- accel/accel.sh@21 -- # val= 00:10:34.261 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:34.261 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:34.261 10:05:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:34.261 10:05:47 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:34.261 10:05:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:34.261 00:10:34.261 real 0m2.755s 00:10:34.261 user 0m2.462s 00:10:34.261 sys 0m0.290s 00:10:34.261 10:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.261 10:05:47 -- common/autotest_common.sh@10 -- # set +x 00:10:34.261 ************************************ 00:10:34.261 END TEST accel_comp 00:10:34.261 ************************************ 00:10:34.261 10:05:47 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:10:34.261 10:05:47 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:34.261 10:05:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:34.261 10:05:47 -- common/autotest_common.sh@10 -- # set +x 00:10:34.261 ************************************ 00:10:34.261 START TEST accel_decomp 00:10:34.261 ************************************ 00:10:34.261 10:05:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:10:34.261 10:05:47 -- accel/accel.sh@16 -- # local accel_opc 00:10:34.261 10:05:47 -- accel/accel.sh@17 -- # local accel_module 00:10:34.261 10:05:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:10:34.261 10:05:47 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.261 10:05:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:10:34.261 10:05:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.261 10:05:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.261 10:05:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.261 10:05:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.261 10:05:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.261 10:05:47 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.261 10:05:47 -- accel/accel.sh@42 -- # jq -r . 00:10:34.261 [2024-04-24 10:05:47.317562] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:34.261 [2024-04-24 10:05:47.317658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163935 ] 00:10:34.261 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.261 [2024-04-24 10:05:47.393243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.261 [2024-04-24 10:05:47.474758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.635 10:05:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:35.635 00:10:35.635 SPDK Configuration: 00:10:35.635 Core mask: 0x1 00:10:35.635 00:10:35.635 Accel Perf Configuration: 00:10:35.635 Workload Type: decompress 00:10:35.635 Transfer size: 4096 bytes 00:10:35.635 Vector count 1 00:10:35.635 Module: software 00:10:35.635 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:35.635 Queue depth: 32 00:10:35.635 Allocate depth: 32 00:10:35.635 # threads/core: 1 00:10:35.635 Run time: 1 seconds 00:10:35.635 Verify: Yes 00:10:35.635 00:10:35.635 Running for 1 seconds... 00:10:35.635 00:10:35.635 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:35.635 ------------------------------------------------------------------------------------ 00:10:35.635 0,0 94400/s 173 MiB/s 0 0 00:10:35.635 ==================================================================================== 00:10:35.635 Total 94400/s 368 MiB/s 0 0' 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.635 10:05:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:10:35.635 10:05:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:10:35.635 10:05:48 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.635 10:05:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.635 10:05:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.635 10:05:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.635 10:05:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.635 10:05:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.635 10:05:48 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.635 10:05:48 -- accel/accel.sh@42 -- # jq -r . 00:10:35.635 [2024-04-24 10:05:48.692748] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:35.635 [2024-04-24 10:05:48.692847] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164132 ] 00:10:35.635 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.635 [2024-04-24 10:05:48.767446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.635 [2024-04-24 10:05:48.848661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.635 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:35.635 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.635 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:35.635 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.635 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:35.635 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.635 10:05:48 -- accel/accel.sh@21 -- # val=0x1 00:10:35.635 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.635 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:35.635 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.635 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:35.635 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.635 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.635 10:05:48 -- accel/accel.sh@21 -- # val=decompress 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val=software 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@23 -- # accel_module=software 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val=32 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 
10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val=32 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val=1 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val=Yes 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:35.636 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:35.636 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:35.636 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:37.008 10:05:50 -- accel/accel.sh@21 -- # val= 00:10:37.008 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:10:37.008 10:05:50 -- accel/accel.sh@21 -- # val= 00:10:37.008 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:10:37.008 10:05:50 -- accel/accel.sh@21 -- # val= 00:10:37.008 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:10:37.008 10:05:50 -- accel/accel.sh@21 -- # val= 00:10:37.008 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:10:37.008 10:05:50 -- accel/accel.sh@21 -- # val= 00:10:37.008 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:10:37.008 10:05:50 -- accel/accel.sh@21 -- # val= 00:10:37.008 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:10:37.008 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:10:37.008 10:05:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:37.008 10:05:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:37.008 10:05:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:37.008 00:10:37.008 real 0m2.752s 00:10:37.008 user 0m2.467s 00:10:37.008 sys 0m0.283s 00:10:37.008 10:05:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.008 10:05:50 -- common/autotest_common.sh@10 -- # set +x 00:10:37.008 ************************************ 00:10:37.008 END TEST accel_decomp 00:10:37.008 ************************************ 00:10:37.008 10:05:50 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 
-w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:37.008 10:05:50 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:37.008 10:05:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.008 10:05:50 -- common/autotest_common.sh@10 -- # set +x 00:10:37.008 ************************************ 00:10:37.008 START TEST accel_decmop_full 00:10:37.008 ************************************ 00:10:37.008 10:05:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:37.008 10:05:50 -- accel/accel.sh@16 -- # local accel_opc 00:10:37.008 10:05:50 -- accel/accel.sh@17 -- # local accel_module 00:10:37.008 10:05:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:37.008 10:05:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:37.008 10:05:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.008 10:05:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.008 10:05:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.008 10:05:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.008 10:05:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.008 10:05:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.008 10:05:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.008 10:05:50 -- accel/accel.sh@42 -- # jq -r . 00:10:37.008 [2024-04-24 10:05:50.121352] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:37.008 [2024-04-24 10:05:50.121448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164359 ] 00:10:37.008 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.008 [2024-04-24 10:05:50.200787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.008 [2024-04-24 10:05:50.284745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.380 10:05:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:38.380 00:10:38.380 SPDK Configuration: 00:10:38.380 Core mask: 0x1 00:10:38.380 00:10:38.380 Accel Perf Configuration: 00:10:38.380 Workload Type: decompress 00:10:38.380 Transfer size: 111250 bytes 00:10:38.380 Vector count 1 00:10:38.380 Module: software 00:10:38.380 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:38.380 Queue depth: 32 00:10:38.380 Allocate depth: 32 00:10:38.380 # threads/core: 1 00:10:38.380 Run time: 1 seconds 00:10:38.380 Verify: Yes 00:10:38.380 00:10:38.380 Running for 1 seconds... 
00:10:38.380 00:10:38.380 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:38.380 ------------------------------------------------------------------------------------ 00:10:38.380 0,0 5728/s 236 MiB/s 0 0 00:10:38.380 ==================================================================================== 00:10:38.380 Total 5728/s 607 MiB/s 0 0' 00:10:38.380 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.380 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.380 10:05:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:38.380 10:05:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:38.380 10:05:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:38.380 10:05:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:38.380 10:05:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.380 10:05:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.380 10:05:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:38.380 10:05:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:38.380 10:05:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:38.380 10:05:51 -- accel/accel.sh@42 -- # jq -r . 00:10:38.380 [2024-04-24 10:05:51.508284] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:38.380 [2024-04-24 10:05:51.508378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164568 ] 00:10:38.380 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.380 [2024-04-24 10:05:51.582980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.638 [2024-04-24 10:05:51.662419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val= 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val= 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val= 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val=0x1 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val= 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val= 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val=decompress 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case 
"$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val= 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val=software 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@23 -- # accel_module=software 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val=32 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val=32 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val=1 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.638 10:05:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:38.638 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.638 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.639 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.639 10:05:51 -- accel/accel.sh@21 -- # val=Yes 00:10:38.639 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.639 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.639 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.639 10:05:51 -- accel/accel.sh@21 -- # val= 00:10:38.639 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.639 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.639 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:38.639 10:05:51 -- accel/accel.sh@21 -- # val= 00:10:38.639 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.639 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:10:38.639 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:10:39.590 10:05:52 -- accel/accel.sh@21 -- # val= 00:10:39.590 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # IFS=: 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:10:39.590 10:05:52 -- accel/accel.sh@21 -- # val= 00:10:39.590 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # IFS=: 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:10:39.590 10:05:52 -- accel/accel.sh@21 -- # val= 00:10:39.590 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.590 10:05:52 
-- accel/accel.sh@20 -- # IFS=: 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:10:39.590 10:05:52 -- accel/accel.sh@21 -- # val= 00:10:39.590 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # IFS=: 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:10:39.590 10:05:52 -- accel/accel.sh@21 -- # val= 00:10:39.590 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # IFS=: 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:10:39.590 10:05:52 -- accel/accel.sh@21 -- # val= 00:10:39.590 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # IFS=: 00:10:39.590 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:10:39.590 10:05:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:39.590 10:05:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:39.590 10:05:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:39.590 00:10:39.590 real 0m2.756s 00:10:39.590 user 0m2.469s 00:10:39.590 sys 0m0.283s 00:10:39.590 10:05:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.590 10:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:39.590 ************************************ 00:10:39.590 END TEST accel_decmop_full 00:10:39.590 ************************************ 00:10:39.848 10:05:52 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:39.848 10:05:52 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:39.848 10:05:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:39.848 10:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:39.848 ************************************ 00:10:39.848 START TEST accel_decomp_mcore 00:10:39.848 ************************************ 00:10:39.848 10:05:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:39.848 10:05:52 -- accel/accel.sh@16 -- # local accel_opc 00:10:39.848 10:05:52 -- accel/accel.sh@17 -- # local accel_module 00:10:39.848 10:05:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:39.848 10:05:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.848 10:05:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:39.848 10:05:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:39.848 10:05:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.848 10:05:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.848 10:05:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:39.848 10:05:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:39.848 10:05:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:39.848 10:05:52 -- accel/accel.sh@42 -- # jq -r . 00:10:39.848 [2024-04-24 10:05:52.918549] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:39.848 [2024-04-24 10:05:52.918643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164772 ] 00:10:39.848 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.848 [2024-04-24 10:05:52.993655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.848 [2024-04-24 10:05:53.076066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.848 [2024-04-24 10:05:53.076147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.848 [2024-04-24 10:05:53.076226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.848 [2024-04-24 10:05:53.076228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.220 10:05:54 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:41.220 00:10:41.220 SPDK Configuration: 00:10:41.220 Core mask: 0xf 00:10:41.220 00:10:41.220 Accel Perf Configuration: 00:10:41.220 Workload Type: decompress 00:10:41.220 Transfer size: 4096 bytes 00:10:41.220 Vector count 1 00:10:41.220 Module: software 00:10:41.220 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:41.220 Queue depth: 32 00:10:41.220 Allocate depth: 32 00:10:41.220 # threads/core: 1 00:10:41.220 Run time: 1 seconds 00:10:41.220 Verify: Yes 00:10:41.220 00:10:41.220 Running for 1 seconds... 00:10:41.220 00:10:41.220 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:41.220 ------------------------------------------------------------------------------------ 00:10:41.220 0,0 73952/s 136 MiB/s 0 0 00:10:41.220 3,0 76096/s 140 MiB/s 0 0 00:10:41.220 2,0 76064/s 140 MiB/s 0 0 00:10:41.220 1,0 76160/s 140 MiB/s 0 0 00:10:41.220 ==================================================================================== 00:10:41.220 Total 302272/s 1180 MiB/s 0 0' 00:10:41.220 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.220 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.220 10:05:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:41.220 10:05:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:41.220 10:05:54 -- accel/accel.sh@12 -- # build_accel_config 00:10:41.220 10:05:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:41.220 10:05:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.220 10:05:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.220 10:05:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:41.220 10:05:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:41.220 10:05:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:41.220 10:05:54 -- accel/accel.sh@42 -- # jq -r . 00:10:41.220 [2024-04-24 10:05:54.290655] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:41.220 [2024-04-24 10:05:54.290752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164954 ] 00:10:41.220 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.220 [2024-04-24 10:05:54.367220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.220 [2024-04-24 10:05:54.450999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.220 [2024-04-24 10:05:54.451092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.220 [2024-04-24 10:05:54.451129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.220 [2024-04-24 10:05:54.451131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val= 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val= 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val= 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val=0xf 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val= 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val= 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val=decompress 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val= 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val=software 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@23 -- # accel_module=software 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case 
"$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val=32 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.478 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.478 10:05:54 -- accel/accel.sh@21 -- # val=32 00:10:41.478 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.479 10:05:54 -- accel/accel.sh@21 -- # val=1 00:10:41.479 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.479 10:05:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:41.479 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.479 10:05:54 -- accel/accel.sh@21 -- # val=Yes 00:10:41.479 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.479 10:05:54 -- accel/accel.sh@21 -- # val= 00:10:41.479 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:41.479 10:05:54 -- accel/accel.sh@21 -- # val= 00:10:41.479 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:10:41.479 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:10:42.412 10:05:55 -- accel/accel.sh@21 -- # val= 00:10:42.412 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:10:42.412 10:05:55 -- accel/accel.sh@21 -- # val= 00:10:42.412 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:10:42.412 10:05:55 -- accel/accel.sh@21 -- # val= 00:10:42.412 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:10:42.412 10:05:55 -- accel/accel.sh@21 -- # val= 00:10:42.412 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:10:42.412 10:05:55 -- accel/accel.sh@21 -- # val= 00:10:42.412 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:10:42.412 10:05:55 -- accel/accel.sh@21 -- # val= 00:10:42.412 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:10:42.412 10:05:55 -- accel/accel.sh@21 -- # val= 00:10:42.412 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:10:42.412 10:05:55 -- accel/accel.sh@21 -- # val= 00:10:42.412 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.412 
10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:10:42.412 10:05:55 -- accel/accel.sh@21 -- # val= 00:10:42.412 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:10:42.412 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:10:42.412 10:05:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:42.412 10:05:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:42.412 10:05:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.412 00:10:42.412 real 0m2.767s 00:10:42.412 user 0m9.189s 00:10:42.412 sys 0m0.308s 00:10:42.412 10:05:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.412 10:05:55 -- common/autotest_common.sh@10 -- # set +x 00:10:42.412 ************************************ 00:10:42.412 END TEST accel_decomp_mcore 00:10:42.412 ************************************ 00:10:42.671 10:05:55 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:42.671 10:05:55 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:42.671 10:05:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:42.671 10:05:55 -- common/autotest_common.sh@10 -- # set +x 00:10:42.671 ************************************ 00:10:42.671 START TEST accel_decomp_full_mcore 00:10:42.671 ************************************ 00:10:42.671 10:05:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:42.671 10:05:55 -- accel/accel.sh@16 -- # local accel_opc 00:10:42.671 10:05:55 -- accel/accel.sh@17 -- # local accel_module 00:10:42.671 10:05:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:42.671 10:05:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:42.671 10:05:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.671 10:05:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.671 10:05:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.671 10:05:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.671 10:05:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.671 10:05:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.671 10:05:55 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.671 10:05:55 -- accel/accel.sh@42 -- # jq -r . 00:10:42.671 [2024-04-24 10:05:55.736436] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:42.671 [2024-04-24 10:05:55.736536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165158 ] 00:10:42.671 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.671 [2024-04-24 10:05:55.816698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.671 [2024-04-24 10:05:55.903860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.671 [2024-04-24 10:05:55.903945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.671 [2024-04-24 10:05:55.904025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.671 [2024-04-24 10:05:55.904027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.046 10:05:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:44.046 00:10:44.046 SPDK Configuration: 00:10:44.046 Core mask: 0xf 00:10:44.046 00:10:44.046 Accel Perf Configuration: 00:10:44.046 Workload Type: decompress 00:10:44.046 Transfer size: 111250 bytes 00:10:44.046 Vector count 1 00:10:44.046 Module: software 00:10:44.046 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:44.046 Queue depth: 32 00:10:44.046 Allocate depth: 32 00:10:44.046 # threads/core: 1 00:10:44.046 Run time: 1 seconds 00:10:44.046 Verify: Yes 00:10:44.046 00:10:44.046 Running for 1 seconds... 00:10:44.046 00:10:44.046 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:44.046 ------------------------------------------------------------------------------------ 00:10:44.046 0,0 5568/s 230 MiB/s 0 0 00:10:44.046 3,0 5600/s 231 MiB/s 0 0 00:10:44.046 2,0 5600/s 231 MiB/s 0 0 00:10:44.046 1,0 5600/s 231 MiB/s 0 0 00:10:44.046 ==================================================================================== 00:10:44.046 Total 22368/s 2373 MiB/s 0 0' 00:10:44.046 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.046 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.046 10:05:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:44.046 10:05:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:44.046 10:05:57 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.046 10:05:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.046 10:05:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.046 10:05:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.046 10:05:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.046 10:05:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.046 10:05:57 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.046 10:05:57 -- accel/accel.sh@42 -- # jq -r . 00:10:44.046 [2024-04-24 10:05:57.143288] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
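[annotation] For reference, the decompress run captured above can be reproduced outside the harness with the same accel_perf flags that appear in the trace. In this run build_accel_config left accel_json_cfg=() empty, so the JSON config normally supplied on -c /dev/fd/62 is omitted below; paths are relative to an SPDK checkout and this is a sketch, not the exact accel.sh invocation:

# 1-second software decompress of the pre-generated bib file on 4 cores
# (core mask 0xf), with verification enabled (-y), flags copied from the log.
./build/examples/accel_perf \
    -t 1 -w decompress \
    -l ./test/accel/bib \
    -y -o 0 -m 0xf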
00:10:44.046 [2024-04-24 10:05:57.143382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165342 ] 00:10:44.046 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.046 [2024-04-24 10:05:57.219838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.046 [2024-04-24 10:05:57.303543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.046 [2024-04-24 10:05:57.303631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.046 [2024-04-24 10:05:57.303708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.046 [2024-04-24 10:05:57.303709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val= 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val= 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val= 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val=0xf 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val= 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val= 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val=decompress 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val= 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val=software 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@23 -- # accel_module=software 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case 
"$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val=32 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.305 10:05:57 -- accel/accel.sh@21 -- # val=32 00:10:44.305 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.305 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.306 10:05:57 -- accel/accel.sh@21 -- # val=1 00:10:44.306 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.306 10:05:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:44.306 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.306 10:05:57 -- accel/accel.sh@21 -- # val=Yes 00:10:44.306 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.306 10:05:57 -- accel/accel.sh@21 -- # val= 00:10:44.306 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:44.306 10:05:57 -- accel/accel.sh@21 -- # val= 00:10:44.306 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:10:44.306 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:10:45.241 10:05:58 -- accel/accel.sh@21 -- # val= 00:10:45.241 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.241 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:10:45.241 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:10:45.241 10:05:58 -- accel/accel.sh@21 -- # val= 00:10:45.499 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:10:45.499 10:05:58 -- accel/accel.sh@21 -- # val= 00:10:45.499 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:10:45.499 10:05:58 -- accel/accel.sh@21 -- # val= 00:10:45.499 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:10:45.499 10:05:58 -- accel/accel.sh@21 -- # val= 00:10:45.499 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:10:45.499 10:05:58 -- accel/accel.sh@21 -- # val= 00:10:45.499 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:10:45.499 10:05:58 -- accel/accel.sh@21 -- # val= 00:10:45.499 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:10:45.499 10:05:58 -- accel/accel.sh@21 -- # val= 00:10:45.499 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.499 
10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:10:45.499 10:05:58 -- accel/accel.sh@21 -- # val= 00:10:45.499 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:10:45.499 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:10:45.499 10:05:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:45.499 10:05:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:45.499 10:05:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:45.499 00:10:45.499 real 0m2.815s 00:10:45.499 user 0m9.312s 00:10:45.499 sys 0m0.314s 00:10:45.499 10:05:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.499 10:05:58 -- common/autotest_common.sh@10 -- # set +x 00:10:45.499 ************************************ 00:10:45.499 END TEST accel_decomp_full_mcore 00:10:45.499 ************************************ 00:10:45.499 10:05:58 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:45.499 10:05:58 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:45.499 10:05:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:45.499 10:05:58 -- common/autotest_common.sh@10 -- # set +x 00:10:45.499 ************************************ 00:10:45.499 START TEST accel_decomp_mthread 00:10:45.499 ************************************ 00:10:45.499 10:05:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:45.499 10:05:58 -- accel/accel.sh@16 -- # local accel_opc 00:10:45.499 10:05:58 -- accel/accel.sh@17 -- # local accel_module 00:10:45.499 10:05:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:45.499 10:05:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:45.499 10:05:58 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.499 10:05:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.499 10:05:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.499 10:05:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.499 10:05:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.499 10:05:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.499 10:05:58 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.499 10:05:58 -- accel/accel.sh@42 -- # jq -r . 00:10:45.499 [2024-04-24 10:05:58.599507] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:45.499 [2024-04-24 10:05:58.599594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165540 ] 00:10:45.499 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.499 [2024-04-24 10:05:58.675797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.499 [2024-04-24 10:05:58.760416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.875 10:05:59 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:46.875 00:10:46.875 SPDK Configuration: 00:10:46.875 Core mask: 0x1 00:10:46.875 00:10:46.875 Accel Perf Configuration: 00:10:46.875 Workload Type: decompress 00:10:46.875 Transfer size: 4096 bytes 00:10:46.875 Vector count 1 00:10:46.875 Module: software 00:10:46.875 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:46.875 Queue depth: 32 00:10:46.875 Allocate depth: 32 00:10:46.875 # threads/core: 2 00:10:46.875 Run time: 1 seconds 00:10:46.875 Verify: Yes 00:10:46.875 00:10:46.875 Running for 1 seconds... 00:10:46.875 00:10:46.875 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:46.875 ------------------------------------------------------------------------------------ 00:10:46.875 0,1 47520/s 87 MiB/s 0 0 00:10:46.875 0,0 47360/s 87 MiB/s 0 0 00:10:46.875 ==================================================================================== 00:10:46.875 Total 94880/s 370 MiB/s 0 0' 00:10:46.875 10:05:59 -- accel/accel.sh@20 -- # IFS=: 00:10:46.875 10:05:59 -- accel/accel.sh@20 -- # read -r var val 00:10:46.875 10:05:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:46.875 10:05:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:46.875 10:05:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.875 10:05:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.875 10:05:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.875 10:05:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.875 10:05:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.875 10:05:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:46.875 10:05:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.875 10:05:59 -- accel/accel.sh@42 -- # jq -r . 00:10:46.875 [2024-04-24 10:05:59.981021] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
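[annotation] In the result table above each row is one (core,thread) worker; with -T 2 both threads run on core 0, and the Total row is the sum over workers (47520 + 47360 = 94880 transfers/s). If a run is saved to a file (perf.out is an illustrative name), the per-worker rows between the dashed header and the '====' separator can be re-summed as a sanity check:

# Sum the Transfers column of the per-worker rows; expect 94880/s here.
awk '/^-+$/ {in_rows=1; next}
     /^=+$/ {in_rows=0}
     in_rows {gsub("/s", "", $2); sum += $2}
     END {print sum "/s"}' perf.out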
00:10:46.875 [2024-04-24 10:05:59.981124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165724 ] 00:10:46.875 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.875 [2024-04-24 10:06:00.063544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.875 [2024-04-24 10:06:00.147468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val= 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val= 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val= 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val=0x1 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val= 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val= 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val=decompress 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val= 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val=software 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@23 -- # accel_module=software 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val=32 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 
10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val=32 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val=2 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val=Yes 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val= 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:47.134 10:06:00 -- accel/accel.sh@21 -- # val= 00:10:47.134 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:10:47.134 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:10:48.070 10:06:01 -- accel/accel.sh@21 -- # val= 00:10:48.070 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.070 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:10:48.070 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:10:48.070 10:06:01 -- accel/accel.sh@21 -- # val= 00:10:48.070 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.070 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:10:48.070 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:10:48.070 10:06:01 -- accel/accel.sh@21 -- # val= 00:10:48.070 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.070 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:10:48.070 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:10:48.070 10:06:01 -- accel/accel.sh@21 -- # val= 00:10:48.070 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.070 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:10:48.070 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:10:48.070 10:06:01 -- accel/accel.sh@21 -- # val= 00:10:48.070 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.070 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:10:48.070 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:10:48.328 10:06:01 -- accel/accel.sh@21 -- # val= 00:10:48.328 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.328 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:10:48.328 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:10:48.328 10:06:01 -- accel/accel.sh@21 -- # val= 00:10:48.328 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.328 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:10:48.328 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:10:48.328 10:06:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:48.328 10:06:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:48.328 10:06:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:48.328 00:10:48.328 real 0m2.773s 00:10:48.328 user 0m2.478s 00:10:48.328 sys 0m0.301s 00:10:48.328 10:06:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.328 10:06:01 -- common/autotest_common.sh@10 -- # 
set +x 00:10:48.328 ************************************ 00:10:48.328 END TEST accel_decomp_mthread 00:10:48.328 ************************************ 00:10:48.328 10:06:01 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:48.328 10:06:01 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:48.328 10:06:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:48.328 10:06:01 -- common/autotest_common.sh@10 -- # set +x 00:10:48.328 ************************************ 00:10:48.328 START TEST accel_deomp_full_mthread 00:10:48.328 ************************************ 00:10:48.328 10:06:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:48.328 10:06:01 -- accel/accel.sh@16 -- # local accel_opc 00:10:48.328 10:06:01 -- accel/accel.sh@17 -- # local accel_module 00:10:48.328 10:06:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:48.328 10:06:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:48.328 10:06:01 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.328 10:06:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.328 10:06:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.328 10:06:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.328 10:06:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.328 10:06:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.328 10:06:01 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.328 10:06:01 -- accel/accel.sh@42 -- # jq -r . 00:10:48.328 [2024-04-24 10:06:01.409922] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:48.328 [2024-04-24 10:06:01.409986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166019 ] 00:10:48.328 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.328 [2024-04-24 10:06:01.481341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.328 [2024-04-24 10:06:01.559501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.702 10:06:02 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:49.702 00:10:49.702 SPDK Configuration: 00:10:49.702 Core mask: 0x1 00:10:49.702 00:10:49.702 Accel Perf Configuration: 00:10:49.702 Workload Type: decompress 00:10:49.702 Transfer size: 111250 bytes 00:10:49.702 Vector count 1 00:10:49.702 Module: software 00:10:49.702 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:49.702 Queue depth: 32 00:10:49.702 Allocate depth: 32 00:10:49.702 # threads/core: 2 00:10:49.702 Run time: 1 seconds 00:10:49.702 Verify: Yes 00:10:49.702 00:10:49.702 Running for 1 seconds... 
00:10:49.702 00:10:49.702 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:49.702 ------------------------------------------------------------------------------------ 00:10:49.702 0,1 2912/s 120 MiB/s 0 0 00:10:49.702 0,0 2912/s 120 MiB/s 0 0 00:10:49.702 ==================================================================================== 00:10:49.702 Total 5824/s 617 MiB/s 0 0' 00:10:49.702 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.702 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.702 10:06:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:49.702 10:06:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:49.702 10:06:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.702 10:06:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:49.702 10:06:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.702 10:06:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.702 10:06:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:49.702 10:06:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:49.702 10:06:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:49.702 10:06:02 -- accel/accel.sh@42 -- # jq -r . 00:10:49.702 [2024-04-24 10:06:02.778352] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:49.702 [2024-04-24 10:06:02.778446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166229 ] 00:10:49.702 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.702 [2024-04-24 10:06:02.852386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.702 [2024-04-24 10:06:02.931142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val= 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val= 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val= 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val=0x1 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val= 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val= 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val=decompress 
00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val= 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val=software 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@23 -- # accel_module=software 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val=32 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val=32 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val=2 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val=Yes 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val= 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:49.961 10:06:02 -- accel/accel.sh@21 -- # val= 00:10:49.961 10:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # IFS=: 00:10:49.961 10:06:02 -- accel/accel.sh@20 -- # read -r var val 00:10:50.896 10:06:04 -- accel/accel.sh@21 -- # val= 00:10:50.896 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:10:50.896 10:06:04 -- accel/accel.sh@21 -- # val= 00:10:50.896 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:10:50.896 10:06:04 -- accel/accel.sh@21 -- # val= 00:10:50.896 10:06:04 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:10:50.896 10:06:04 -- accel/accel.sh@21 -- # val= 00:10:50.896 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:10:50.896 10:06:04 -- accel/accel.sh@21 -- # val= 00:10:50.896 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:10:50.896 10:06:04 -- accel/accel.sh@21 -- # val= 00:10:50.896 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:10:50.896 10:06:04 -- accel/accel.sh@21 -- # val= 00:10:50.896 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:10:50.896 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:10:50.896 10:06:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:50.896 10:06:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:50.896 10:06:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:50.896 00:10:50.896 real 0m2.747s 00:10:50.896 user 0m2.481s 00:10:50.896 sys 0m0.269s 00:10:50.896 10:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.896 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:10:50.896 ************************************ 00:10:50.896 END TEST accel_deomp_full_mthread 00:10:50.896 ************************************ 00:10:51.155 10:06:04 -- accel/accel.sh@116 -- # [[ n == y ]] 00:10:51.155 10:06:04 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:51.155 10:06:04 -- accel/accel.sh@129 -- # build_accel_config 00:10:51.155 10:06:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:51.155 10:06:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.155 10:06:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:51.155 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:10:51.155 10:06:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.155 10:06:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.155 10:06:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:51.155 10:06:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:51.155 10:06:04 -- accel/accel.sh@41 -- # local IFS=, 00:10:51.155 10:06:04 -- accel/accel.sh@42 -- # jq -r . 00:10:51.155 ************************************ 00:10:51.155 START TEST accel_dif_functional_tests 00:10:51.155 ************************************ 00:10:51.155 10:06:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:51.155 [2024-04-24 10:06:04.219005] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
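[annotation] Both accel_perf and the DIF test binary started above receive their accel configuration as JSON on -c /dev/fd/62; since accel_json_cfg=() stayed empty in this run, the tests fall back to the software module (Module: software in the output). A hedged sketch of the same file-descriptor pattern, with config.json as a placeholder name:

# Feed a JSON config to the DIF functional test over fd 62,
# mirroring the '-c /dev/fd/62' usage in the trace.
exec 62< config.json
./test/accel/dif/dif -c /dev/fd/62
exec 62<&-          # close the descriptor afterwards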
00:10:51.155 [2024-04-24 10:06:04.219105] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166526 ] 00:10:51.155 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.155 [2024-04-24 10:06:04.295770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.155 [2024-04-24 10:06:04.380966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.155 [2024-04-24 10:06:04.381055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.155 [2024-04-24 10:06:04.381057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.414 00:10:51.414 00:10:51.414 CUnit - A unit testing framework for C - Version 2.1-3 00:10:51.414 http://cunit.sourceforge.net/ 00:10:51.414 00:10:51.414 00:10:51.414 Suite: accel_dif 00:10:51.414 Test: verify: DIF generated, GUARD check ...passed 00:10:51.414 Test: verify: DIF generated, APPTAG check ...passed 00:10:51.414 Test: verify: DIF generated, REFTAG check ...passed 00:10:51.414 Test: verify: DIF not generated, GUARD check ...[2024-04-24 10:06:04.460538] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:51.414 [2024-04-24 10:06:04.460591] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:51.414 passed 00:10:51.414 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 10:06:04.460623] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:51.414 [2024-04-24 10:06:04.460642] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:51.414 passed 00:10:51.414 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 10:06:04.460663] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:51.414 [2024-04-24 10:06:04.460683] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:51.414 passed 00:10:51.414 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:51.414 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 10:06:04.460726] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:51.414 passed 00:10:51.414 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:51.414 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:51.414 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:51.414 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 10:06:04.460830] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:51.414 passed 00:10:51.414 Test: generate copy: DIF generated, GUARD check ...passed 00:10:51.414 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:51.414 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:51.414 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:51.414 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:51.414 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:51.414 Test: generate copy: iovecs-len validate ...[2024-04-24 10:06:04.461010] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:10:51.414 passed 00:10:51.414 Test: generate copy: buffer alignment validate ...passed 00:10:51.414 00:10:51.414 Run Summary: Type Total Ran Passed Failed Inactive 00:10:51.414 suites 1 1 n/a 0 0 00:10:51.414 tests 20 20 20 0 0 00:10:51.414 asserts 204 204 204 0 n/a 00:10:51.414 00:10:51.414 Elapsed time = 0.000 seconds 00:10:51.414 00:10:51.414 real 0m0.449s 00:10:51.414 user 0m0.678s 00:10:51.414 sys 0m0.177s 00:10:51.414 10:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.414 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:10:51.414 ************************************ 00:10:51.414 END TEST accel_dif_functional_tests 00:10:51.414 ************************************ 00:10:51.414 00:10:51.414 real 0m58.658s 00:10:51.414 user 1m5.750s 00:10:51.414 sys 0m7.637s 00:10:51.414 10:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.414 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:10:51.414 ************************************ 00:10:51.414 END TEST accel 00:10:51.414 ************************************ 00:10:51.673 10:06:04 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:51.673 10:06:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:51.673 10:06:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.673 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:10:51.673 ************************************ 00:10:51.673 START TEST accel_rpc 00:10:51.673 ************************************ 00:10:51.673 10:06:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:51.673 * Looking for test storage... 00:10:51.673 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:10:51.673 10:06:04 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:51.673 10:06:04 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1166821 00:10:51.673 10:06:04 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:51.673 10:06:04 -- accel/accel_rpc.sh@15 -- # waitforlisten 1166821 00:10:51.673 10:06:04 -- common/autotest_common.sh@819 -- # '[' -z 1166821 ']' 00:10:51.673 10:06:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.673 10:06:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:51.673 10:06:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.674 10:06:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:51.674 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:10:51.674 [2024-04-24 10:06:04.849358] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
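[annotation] accel_rpc.sh exercises opcode assignment before the framework is initialized: spdk_tgt is started with --wait-for-rpc, the copy opcode is assigned (first to a bogus module, then to software), subsystem init is triggered, and the assignment is read back. The RPC names below all appear in the trace; the sketch talks to the default /var/tmp/spdk.sock and assumes the target is already listening (the test itself waits via waitforlisten):

# Assign the 'copy' opcode before framework init, then verify it stuck.
./build/bin/spdk_tgt --wait-for-rpc &
./scripts/rpc.py accel_assign_opc -o copy -m software
./scripts/rpc.py framework_start_init
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expect: software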
00:10:51.674 [2024-04-24 10:06:04.849454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166821 ] 00:10:51.674 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.674 [2024-04-24 10:06:04.929206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.932 [2024-04-24 10:06:05.010019] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:51.932 [2024-04-24 10:06:05.010141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.499 10:06:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:52.499 10:06:05 -- common/autotest_common.sh@852 -- # return 0 00:10:52.499 10:06:05 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:52.499 10:06:05 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:52.499 10:06:05 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:52.499 10:06:05 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:52.499 10:06:05 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:52.499 10:06:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:52.499 10:06:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:52.499 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.499 ************************************ 00:10:52.499 START TEST accel_assign_opcode 00:10:52.499 ************************************ 00:10:52.499 10:06:05 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:10:52.499 10:06:05 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:52.499 10:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.499 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.499 [2024-04-24 10:06:05.672111] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:52.499 10:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.499 10:06:05 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:52.499 10:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.499 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.499 [2024-04-24 10:06:05.680121] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:52.499 10:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.499 10:06:05 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:52.499 10:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.499 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.758 10:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.758 10:06:05 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:52.758 10:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.758 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:10:52.758 10:06:05 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:52.758 10:06:05 -- accel/accel_rpc.sh@42 -- # grep software 00:10:52.758 10:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.758 software 00:10:52.758 00:10:52.758 real 0m0.258s 00:10:52.758 user 0m0.038s 00:10:52.758 sys 0m0.013s 00:10:52.758 10:06:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.758 10:06:05 -- common/autotest_common.sh@10 -- # set +x 
00:10:52.758 ************************************ 00:10:52.758 END TEST accel_assign_opcode 00:10:52.758 ************************************ 00:10:52.758 10:06:05 -- accel/accel_rpc.sh@55 -- # killprocess 1166821 00:10:52.758 10:06:05 -- common/autotest_common.sh@926 -- # '[' -z 1166821 ']' 00:10:52.758 10:06:05 -- common/autotest_common.sh@930 -- # kill -0 1166821 00:10:52.758 10:06:05 -- common/autotest_common.sh@931 -- # uname 00:10:52.758 10:06:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:52.758 10:06:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1166821 00:10:52.758 10:06:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:52.758 10:06:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:52.758 10:06:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1166821' 00:10:52.758 killing process with pid 1166821 00:10:52.758 10:06:06 -- common/autotest_common.sh@945 -- # kill 1166821 00:10:52.758 10:06:06 -- common/autotest_common.sh@950 -- # wait 1166821 00:10:53.325 00:10:53.325 real 0m1.579s 00:10:53.325 user 0m1.579s 00:10:53.325 sys 0m0.461s 00:10:53.325 10:06:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.325 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:10:53.325 ************************************ 00:10:53.325 END TEST accel_rpc 00:10:53.325 ************************************ 00:10:53.325 10:06:06 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:10:53.325 10:06:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:53.325 10:06:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:53.325 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:10:53.325 ************************************ 00:10:53.325 START TEST app_cmdline 00:10:53.325 ************************************ 00:10:53.325 10:06:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:10:53.325 * Looking for test storage... 00:10:53.325 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:10:53.325 10:06:06 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:53.325 10:06:06 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1167305 00:10:53.325 10:06:06 -- app/cmdline.sh@18 -- # waitforlisten 1167305 00:10:53.325 10:06:06 -- common/autotest_common.sh@819 -- # '[' -z 1167305 ']' 00:10:53.325 10:06:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.325 10:06:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:53.325 10:06:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.325 10:06:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:53.325 10:06:06 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:53.325 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:10:53.325 [2024-04-24 10:06:06.453439] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
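[annotation] cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable; spdk_get_version returns the JSON shown further below, and any other call (env_dpdk_get_mem_stats in this run) fails with JSON-RPC error -32601 "Method not found". A short sketch of the same checks:

# Only the two allowed methods should answer on this target.
./scripts/rpc.py spdk_get_version | jq -r '.version'     # SPDK v24.01.1-pre git sha1 36faa8c312b
./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # rpc_get_methods, spdk_get_version
./scripts/rpc.py env_dpdk_get_mem_stats                  # expected to fail: Method not found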
00:10:53.325 [2024-04-24 10:06:06.453535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167305 ] 00:10:53.325 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.325 [2024-04-24 10:06:06.528569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.584 [2024-04-24 10:06:06.617702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:53.584 [2024-04-24 10:06:06.617811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.151 10:06:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:54.151 10:06:07 -- common/autotest_common.sh@852 -- # return 0 00:10:54.151 10:06:07 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:54.151 { 00:10:54.151 "version": "SPDK v24.01.1-pre git sha1 36faa8c312b", 00:10:54.151 "fields": { 00:10:54.151 "major": 24, 00:10:54.151 "minor": 1, 00:10:54.151 "patch": 1, 00:10:54.151 "suffix": "-pre", 00:10:54.151 "commit": "36faa8c312b" 00:10:54.151 } 00:10:54.151 } 00:10:54.151 10:06:07 -- app/cmdline.sh@22 -- # expected_methods=() 00:10:54.151 10:06:07 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:54.151 10:06:07 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:54.151 10:06:07 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:54.419 10:06:07 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:54.419 10:06:07 -- app/cmdline.sh@26 -- # sort 00:10:54.419 10:06:07 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:54.419 10:06:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:54.419 10:06:07 -- common/autotest_common.sh@10 -- # set +x 00:10:54.419 10:06:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:54.419 10:06:07 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:54.419 10:06:07 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:54.419 10:06:07 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:54.419 10:06:07 -- common/autotest_common.sh@640 -- # local es=0 00:10:54.419 10:06:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:54.419 10:06:07 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:10:54.419 10:06:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:54.419 10:06:07 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:10:54.419 10:06:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:54.419 10:06:07 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:10:54.419 10:06:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:54.419 10:06:07 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:10:54.419 10:06:07 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:10:54.419 10:06:07 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:54.419 request: 00:10:54.419 { 00:10:54.419 "method": "env_dpdk_get_mem_stats", 00:10:54.419 "req_id": 1 00:10:54.419 } 00:10:54.419 Got JSON-RPC error response 00:10:54.419 response: 00:10:54.419 { 00:10:54.419 "code": -32601, 00:10:54.419 "message": "Method not found" 00:10:54.419 } 00:10:54.419 10:06:07 -- common/autotest_common.sh@643 -- # es=1 00:10:54.419 10:06:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:54.419 10:06:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:54.419 10:06:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:54.419 10:06:07 -- app/cmdline.sh@1 -- # killprocess 1167305 00:10:54.419 10:06:07 -- common/autotest_common.sh@926 -- # '[' -z 1167305 ']' 00:10:54.419 10:06:07 -- common/autotest_common.sh@930 -- # kill -0 1167305 00:10:54.419 10:06:07 -- common/autotest_common.sh@931 -- # uname 00:10:54.419 10:06:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:54.419 10:06:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1167305 00:10:54.728 10:06:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:54.729 10:06:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:54.729 10:06:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1167305' 00:10:54.729 killing process with pid 1167305 00:10:54.729 10:06:07 -- common/autotest_common.sh@945 -- # kill 1167305 00:10:54.729 10:06:07 -- common/autotest_common.sh@950 -- # wait 1167305 00:10:54.999 00:10:55.000 real 0m1.671s 00:10:55.000 user 0m1.934s 00:10:55.000 sys 0m0.486s 00:10:55.000 10:06:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.000 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:10:55.000 ************************************ 00:10:55.000 END TEST app_cmdline 00:10:55.000 ************************************ 00:10:55.000 10:06:08 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:10:55.000 10:06:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:55.000 10:06:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:55.000 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:10:55.000 ************************************ 00:10:55.000 START TEST version 00:10:55.000 ************************************ 00:10:55.000 10:06:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:10:55.000 * Looking for test storage... 
00:10:55.000 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:10:55.000 10:06:08 -- app/version.sh@17 -- # get_header_version major 00:10:55.000 10:06:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:10:55.000 10:06:08 -- app/version.sh@14 -- # cut -f2 00:10:55.000 10:06:08 -- app/version.sh@14 -- # tr -d '"' 00:10:55.000 10:06:08 -- app/version.sh@17 -- # major=24 00:10:55.000 10:06:08 -- app/version.sh@18 -- # get_header_version minor 00:10:55.000 10:06:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:10:55.000 10:06:08 -- app/version.sh@14 -- # cut -f2 00:10:55.000 10:06:08 -- app/version.sh@14 -- # tr -d '"' 00:10:55.000 10:06:08 -- app/version.sh@18 -- # minor=1 00:10:55.000 10:06:08 -- app/version.sh@19 -- # get_header_version patch 00:10:55.000 10:06:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:10:55.000 10:06:08 -- app/version.sh@14 -- # cut -f2 00:10:55.000 10:06:08 -- app/version.sh@14 -- # tr -d '"' 00:10:55.000 10:06:08 -- app/version.sh@19 -- # patch=1 00:10:55.000 10:06:08 -- app/version.sh@20 -- # get_header_version suffix 00:10:55.000 10:06:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:10:55.000 10:06:08 -- app/version.sh@14 -- # cut -f2 00:10:55.000 10:06:08 -- app/version.sh@14 -- # tr -d '"' 00:10:55.000 10:06:08 -- app/version.sh@20 -- # suffix=-pre 00:10:55.000 10:06:08 -- app/version.sh@22 -- # version=24.1 00:10:55.000 10:06:08 -- app/version.sh@25 -- # (( patch != 0 )) 00:10:55.000 10:06:08 -- app/version.sh@25 -- # version=24.1.1 00:10:55.000 10:06:08 -- app/version.sh@28 -- # version=24.1.1rc0 00:10:55.000 10:06:08 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:10:55.000 10:06:08 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:55.000 10:06:08 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:10:55.000 10:06:08 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:10:55.000 00:10:55.000 real 0m0.178s 00:10:55.000 user 0m0.081s 00:10:55.000 sys 0m0.138s 00:10:55.000 10:06:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.000 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:10:55.000 ************************************ 00:10:55.000 END TEST version 00:10:55.000 ************************************ 00:10:55.259 10:06:08 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@204 -- # uname -s 00:10:55.259 10:06:08 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:10:55.259 10:06:08 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:10:55.259 10:06:08 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:10:55.259 10:06:08 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@268 -- # timing_exit lib 
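[annotation] version.sh derives the version string purely from include/spdk/version.h: each SPDK_VERSION_* define is grepped out, the second tab-separated field is taken, quotes are stripped, and the pieces are assembled into 24.1 / 24.1.1 / 24.1.1rc0, which must match python3's spdk.__version__. The extraction step for one component, as it appears in the trace:

# Pull the major version out of include/spdk/version.h.
grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
    | cut -f2 | tr -d '"'        # -> 24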
00:10:55.259 10:06:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:55.259 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:10:55.259 10:06:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:10:55.259 10:06:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:10:55.259 10:06:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:10:55.259 10:06:08 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:10:55.259 10:06:08 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:10:55.260 10:06:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:55.260 10:06:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:55.260 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:10:55.260 ************************************ 00:10:55.260 START TEST llvm_fuzz 00:10:55.260 ************************************ 00:10:55.260 10:06:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:10:55.260 * Looking for test storage... 
00:10:55.260 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:10:55.260 10:06:08 -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:10:55.260 10:06:08 -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:10:55.260 10:06:08 -- common/autotest_common.sh@538 -- # fuzzers=() 00:10:55.260 10:06:08 -- common/autotest_common.sh@538 -- # local fuzzers 00:10:55.260 10:06:08 -- common/autotest_common.sh@540 -- # [[ -n '' ]] 00:10:55.260 10:06:08 -- common/autotest_common.sh@543 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:10:55.260 10:06:08 -- common/autotest_common.sh@544 -- # fuzzers=("${fuzzers[@]##*/}") 00:10:55.260 10:06:08 -- common/autotest_common.sh@547 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:10:55.260 10:06:08 -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:10:55.260 10:06:08 -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:10:55.260 10:06:08 -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:10:55.260 10:06:08 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:10:55.260 10:06:08 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:10:55.260 10:06:08 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:10:55.260 10:06:08 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:10:55.260 10:06:08 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:10:55.260 10:06:08 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:10:55.260 10:06:08 -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:10:55.260 10:06:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:55.260 10:06:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:55.260 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:10:55.260 ************************************ 00:10:55.260 START TEST nvmf_fuzz 00:10:55.260 ************************************ 00:10:55.260 10:06:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:10:55.521 * Looking for test storage... 
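get_fuzzer_targets, expanded just above, is a plain glob over test/fuzz/llvm/ with the directory prefix stripped; the case statement that follows skips the helper scripts (common.sh, llvm-gcov.sh) and hands each remaining directory to run_test. A bash sketch of that enumeration pattern (rootdir is a placeholder for the workspace path in the trace):

    rootdir=/path/to/spdk
    fuzzers=("$rootdir/test/fuzz/llvm/"*)      # common.sh llvm-gcov.sh nvmf vfio in this run
    fuzzers=("${fuzzers[@]##*/}")              # keep only the basenames
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            common.sh | llvm-gcov.sh) ;;       # helpers, nothing to run
            *) run_test "${fuzzer}_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
        esac
    done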
00:10:55.521 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:10:55.521 10:06:08 -- nvmf/run.sh@52 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:10:55.521 10:06:08 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:10:55.521 10:06:08 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:55.521 10:06:08 -- common/autotest_common.sh@34 -- # set -e 00:10:55.521 10:06:08 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:55.521 10:06:08 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:55.521 10:06:08 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:55.521 10:06:08 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:10:55.521 10:06:08 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:55.521 10:06:08 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:55.521 10:06:08 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:55.521 10:06:08 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:55.521 10:06:08 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:55.521 10:06:08 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:55.521 10:06:08 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:55.521 10:06:08 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:55.521 10:06:08 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:55.521 10:06:08 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:55.521 10:06:08 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:55.521 10:06:08 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:55.521 10:06:08 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:55.521 10:06:08 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:55.521 10:06:08 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:55.521 10:06:08 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:55.521 10:06:08 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:55.521 10:06:08 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:55.521 10:06:08 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:10:55.521 10:06:08 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:55.521 10:06:08 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:55.521 10:06:08 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:55.521 10:06:08 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:55.521 10:06:08 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:55.521 10:06:08 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:55.522 10:06:08 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:55.522 10:06:08 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:55.522 10:06:08 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:55.522 10:06:08 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:55.522 10:06:08 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:55.522 10:06:08 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:55.522 10:06:08 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:55.522 10:06:08 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:55.522 10:06:08 -- common/build_config.sh@34 -- # 
CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:10:55.522 10:06:08 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:10:55.522 10:06:08 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:10:55.522 10:06:08 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:55.522 10:06:08 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:55.522 10:06:08 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:55.522 10:06:08 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:55.522 10:06:08 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:55.522 10:06:08 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:55.522 10:06:08 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:55.522 10:06:08 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:55.522 10:06:08 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:55.522 10:06:08 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:10:55.522 10:06:08 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:10:55.522 10:06:08 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:55.522 10:06:08 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:10:55.522 10:06:08 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:10:55.522 10:06:08 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:10:55.522 10:06:08 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:10:55.522 10:06:08 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:10:55.522 10:06:08 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:10:55.522 10:06:08 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:10:55.522 10:06:08 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:10:55.522 10:06:08 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:10:55.522 10:06:08 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:10:55.522 10:06:08 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:10:55.522 10:06:08 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:10:55.522 10:06:08 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:10:55.522 10:06:08 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:10:55.522 10:06:08 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:10:55.522 10:06:08 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:10:55.522 10:06:08 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:10:55.522 10:06:08 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:55.522 10:06:08 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:10:55.522 10:06:08 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:10:55.522 10:06:08 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:10:55.522 10:06:08 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:10:55.522 10:06:08 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:10:55.522 10:06:08 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:10:55.522 10:06:08 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:10:55.522 10:06:08 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:10:55.522 10:06:08 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:10:55.522 10:06:08 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:10:55.522 10:06:08 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:55.522 10:06:08 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:10:55.522 10:06:08 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:10:55.522 10:06:08 -- common/autotest_common.sh@48 -- # source 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:10:55.522 10:06:08 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:10:55.522 10:06:08 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:10:55.522 10:06:08 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:10:55.522 10:06:08 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:55.522 10:06:08 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:10:55.522 10:06:08 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:10:55.522 10:06:08 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:10:55.522 10:06:08 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:55.522 10:06:08 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:55.522 10:06:08 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:55.522 10:06:08 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:55.522 10:06:08 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:55.522 10:06:08 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:55.522 10:06:08 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:10:55.522 10:06:08 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:55.522 #define SPDK_CONFIG_H 00:10:55.522 #define SPDK_CONFIG_APPS 1 00:10:55.522 #define SPDK_CONFIG_ARCH native 00:10:55.522 #undef SPDK_CONFIG_ASAN 00:10:55.522 #undef SPDK_CONFIG_AVAHI 00:10:55.522 #undef SPDK_CONFIG_CET 00:10:55.522 #define SPDK_CONFIG_COVERAGE 1 00:10:55.522 #define SPDK_CONFIG_CROSS_PREFIX 00:10:55.522 #undef SPDK_CONFIG_CRYPTO 00:10:55.522 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:55.522 #undef SPDK_CONFIG_CUSTOMOCF 00:10:55.522 #undef SPDK_CONFIG_DAOS 00:10:55.522 #define SPDK_CONFIG_DAOS_DIR 00:10:55.522 #define SPDK_CONFIG_DEBUG 1 00:10:55.522 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:55.522 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:10:55.522 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:55.522 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:55.522 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:55.522 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:10:55.522 #define SPDK_CONFIG_EXAMPLES 1 00:10:55.522 #undef SPDK_CONFIG_FC 00:10:55.522 #define SPDK_CONFIG_FC_PATH 00:10:55.522 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:55.522 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:55.522 #undef SPDK_CONFIG_FUSE 00:10:55.522 #define SPDK_CONFIG_FUZZER 1 00:10:55.522 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:10:55.522 #undef SPDK_CONFIG_GOLANG 00:10:55.522 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:55.522 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:55.522 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:55.522 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:55.522 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:55.522 #define SPDK_CONFIG_IDXD 1 00:10:55.522 #undef SPDK_CONFIG_IDXD_KERNEL 00:10:55.522 #undef SPDK_CONFIG_IPSEC_MB 
00:10:55.522 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:55.522 #define SPDK_CONFIG_ISAL 1 00:10:55.522 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:55.522 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:55.522 #define SPDK_CONFIG_LIBDIR 00:10:55.522 #undef SPDK_CONFIG_LTO 00:10:55.522 #define SPDK_CONFIG_MAX_LCORES 00:10:55.522 #define SPDK_CONFIG_NVME_CUSE 1 00:10:55.522 #undef SPDK_CONFIG_OCF 00:10:55.522 #define SPDK_CONFIG_OCF_PATH 00:10:55.522 #define SPDK_CONFIG_OPENSSL_PATH 00:10:55.522 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:55.522 #undef SPDK_CONFIG_PGO_USE 00:10:55.522 #define SPDK_CONFIG_PREFIX /usr/local 00:10:55.522 #undef SPDK_CONFIG_RAID5F 00:10:55.522 #undef SPDK_CONFIG_RBD 00:10:55.522 #define SPDK_CONFIG_RDMA 1 00:10:55.522 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:55.522 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:55.522 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:55.522 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:55.522 #undef SPDK_CONFIG_SHARED 00:10:55.522 #undef SPDK_CONFIG_SMA 00:10:55.522 #define SPDK_CONFIG_TESTS 1 00:10:55.522 #undef SPDK_CONFIG_TSAN 00:10:55.522 #define SPDK_CONFIG_UBLK 1 00:10:55.522 #define SPDK_CONFIG_UBSAN 1 00:10:55.522 #undef SPDK_CONFIG_UNIT_TESTS 00:10:55.522 #undef SPDK_CONFIG_URING 00:10:55.522 #define SPDK_CONFIG_URING_PATH 00:10:55.522 #undef SPDK_CONFIG_URING_ZNS 00:10:55.522 #undef SPDK_CONFIG_USDT 00:10:55.522 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:55.522 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:55.522 #define SPDK_CONFIG_VFIO_USER 1 00:10:55.522 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:55.522 #define SPDK_CONFIG_VHOST 1 00:10:55.522 #define SPDK_CONFIG_VIRTIO 1 00:10:55.522 #undef SPDK_CONFIG_VTUNE 00:10:55.522 #define SPDK_CONFIG_VTUNE_DIR 00:10:55.522 #define SPDK_CONFIG_WERROR 1 00:10:55.522 #define SPDK_CONFIG_WPDK_DIR 00:10:55.522 #undef SPDK_CONFIG_XNVME 00:10:55.522 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:55.522 10:06:08 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:55.522 10:06:08 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:10:55.522 10:06:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.522 10:06:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.522 10:06:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.522 10:06:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.522 10:06:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.523 10:06:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.523 10:06:08 -- paths/export.sh@5 -- # export PATH 00:10:55.523 10:06:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.523 10:06:08 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:10:55.523 10:06:08 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:10:55.523 10:06:08 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:10:55.523 10:06:08 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:10:55.523 10:06:08 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:55.523 10:06:08 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:55.523 10:06:08 -- pm/common@16 -- # TEST_TAG=N/A 00:10:55.523 10:06:08 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:10:55.523 10:06:08 -- common/autotest_common.sh@52 -- # : 1 00:10:55.523 10:06:08 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:10:55.523 10:06:08 -- common/autotest_common.sh@56 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:55.523 10:06:08 -- common/autotest_common.sh@58 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:10:55.523 10:06:08 -- common/autotest_common.sh@60 -- # : 1 00:10:55.523 10:06:08 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:55.523 10:06:08 -- common/autotest_common.sh@62 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:10:55.523 10:06:08 -- common/autotest_common.sh@64 -- # : 00:10:55.523 10:06:08 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:10:55.523 10:06:08 -- common/autotest_common.sh@66 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:10:55.523 10:06:08 -- common/autotest_common.sh@68 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:10:55.523 10:06:08 -- common/autotest_common.sh@70 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:10:55.523 10:06:08 -- common/autotest_common.sh@72 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:55.523 10:06:08 -- common/autotest_common.sh@74 -- # : 0 00:10:55.523 
10:06:08 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:10:55.523 10:06:08 -- common/autotest_common.sh@76 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:10:55.523 10:06:08 -- common/autotest_common.sh@78 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:10:55.523 10:06:08 -- common/autotest_common.sh@80 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:10:55.523 10:06:08 -- common/autotest_common.sh@82 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:10:55.523 10:06:08 -- common/autotest_common.sh@84 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:10:55.523 10:06:08 -- common/autotest_common.sh@86 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:10:55.523 10:06:08 -- common/autotest_common.sh@88 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:10:55.523 10:06:08 -- common/autotest_common.sh@90 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:55.523 10:06:08 -- common/autotest_common.sh@92 -- # : 1 00:10:55.523 10:06:08 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:10:55.523 10:06:08 -- common/autotest_common.sh@94 -- # : 1 00:10:55.523 10:06:08 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:10:55.523 10:06:08 -- common/autotest_common.sh@96 -- # : rdma 00:10:55.523 10:06:08 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:55.523 10:06:08 -- common/autotest_common.sh@98 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:10:55.523 10:06:08 -- common/autotest_common.sh@100 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:10:55.523 10:06:08 -- common/autotest_common.sh@102 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:10:55.523 10:06:08 -- common/autotest_common.sh@104 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:10:55.523 10:06:08 -- common/autotest_common.sh@106 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:10:55.523 10:06:08 -- common/autotest_common.sh@108 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:10:55.523 10:06:08 -- common/autotest_common.sh@110 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:10:55.523 10:06:08 -- common/autotest_common.sh@112 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:55.523 10:06:08 -- common/autotest_common.sh@114 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:10:55.523 10:06:08 -- common/autotest_common.sh@116 -- # : 1 00:10:55.523 10:06:08 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:10:55.523 10:06:08 -- common/autotest_common.sh@118 -- # : 00:10:55.523 10:06:08 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:55.523 10:06:08 -- common/autotest_common.sh@120 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:10:55.523 10:06:08 -- common/autotest_common.sh@122 -- # : 0 
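The long run of ': 0' / 'export SPDK_TEST_*' pairs in this stretch is autotest_common.sh giving every test knob a default before exporting it, so the CI job only has to set the flags it cares about (this run has SPDK_TEST_FUZZER=1, SPDK_TEST_FUZZER_SHORT=1 and SPDK_RUN_UBSAN=1). One common way to write that default-then-export idiom, shown with a few of the flags visible in the trace (the exact parameter-expansion form in the script may differ):

    : "${SPDK_TEST_FUZZER:=0}";            export SPDK_TEST_FUZZER
    : "${SPDK_TEST_FUZZER_SHORT:=0}";      export SPDK_TEST_FUZZER_SHORT
    : "${SPDK_RUN_UBSAN:=0}";              export SPDK_RUN_UBSAN
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT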
00:10:55.523 10:06:08 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:10:55.523 10:06:08 -- common/autotest_common.sh@124 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:10:55.523 10:06:08 -- common/autotest_common.sh@126 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:10:55.523 10:06:08 -- common/autotest_common.sh@128 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:10:55.523 10:06:08 -- common/autotest_common.sh@130 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:10:55.523 10:06:08 -- common/autotest_common.sh@132 -- # : 00:10:55.523 10:06:08 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:10:55.523 10:06:08 -- common/autotest_common.sh@134 -- # : true 00:10:55.523 10:06:08 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:10:55.523 10:06:08 -- common/autotest_common.sh@136 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:10:55.523 10:06:08 -- common/autotest_common.sh@138 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:10:55.523 10:06:08 -- common/autotest_common.sh@140 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:10:55.523 10:06:08 -- common/autotest_common.sh@142 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:10:55.523 10:06:08 -- common/autotest_common.sh@144 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:10:55.523 10:06:08 -- common/autotest_common.sh@146 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:10:55.523 10:06:08 -- common/autotest_common.sh@148 -- # : 00:10:55.523 10:06:08 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:10:55.523 10:06:08 -- common/autotest_common.sh@150 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:10:55.523 10:06:08 -- common/autotest_common.sh@152 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:10:55.523 10:06:08 -- common/autotest_common.sh@154 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:10:55.523 10:06:08 -- common/autotest_common.sh@156 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:10:55.523 10:06:08 -- common/autotest_common.sh@158 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:10:55.523 10:06:08 -- common/autotest_common.sh@160 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:10:55.523 10:06:08 -- common/autotest_common.sh@163 -- # : 00:10:55.523 10:06:08 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:10:55.523 10:06:08 -- common/autotest_common.sh@165 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:10:55.523 10:06:08 -- common/autotest_common.sh@167 -- # : 0 00:10:55.523 10:06:08 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:55.523 10:06:08 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:10:55.523 10:06:08 -- 
common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:10:55.523 10:06:08 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:10:55.523 10:06:08 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:10:55.523 10:06:08 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.523 10:06:08 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.523 10:06:08 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.524 10:06:08 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.524 10:06:08 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:55.524 10:06:08 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:55.524 10:06:08 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:10:55.524 10:06:08 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:10:55.524 10:06:08 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:55.524 10:06:08 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:10:55.524 10:06:08 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:55.524 10:06:08 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:55.524 10:06:08 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:55.524 10:06:08 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:55.524 10:06:08 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:55.524 10:06:08 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:10:55.524 10:06:08 -- common/autotest_common.sh@196 -- # cat 00:10:55.524 10:06:08 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:10:55.524 10:06:08 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:55.524 10:06:08 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:55.524 10:06:08 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:55.524 10:06:08 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:55.524 10:06:08 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:10:55.524 10:06:08 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:10:55.524 10:06:08 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:10:55.524 10:06:08 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:10:55.524 10:06:08 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:10:55.524 10:06:08 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:10:55.524 10:06:08 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:55.524 10:06:08 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:55.524 10:06:08 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:55.524 10:06:08 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:55.524 10:06:08 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:55.524 10:06:08 -- common/autotest_common.sh@242 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:55.524 10:06:08 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:55.524 10:06:08 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:55.524 10:06:08 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:10:55.524 10:06:08 -- common/autotest_common.sh@249 -- # export valgrind= 00:10:55.524 10:06:08 -- common/autotest_common.sh@249 -- # valgrind= 00:10:55.524 10:06:08 -- common/autotest_common.sh@255 -- # uname -s 00:10:55.524 10:06:08 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:10:55.524 10:06:08 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:10:55.524 10:06:08 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:10:55.524 10:06:08 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:10:55.524 10:06:08 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:10:55.524 10:06:08 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:10:55.524 10:06:08 -- common/autotest_common.sh@265 -- # MAKE=make 00:10:55.524 10:06:08 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j72 00:10:55.524 10:06:08 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:10:55.524 10:06:08 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:10:55.524 10:06:08 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:10:55.524 10:06:08 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:10:55.524 10:06:08 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:10:55.524 10:06:08 -- common/autotest_common.sh@309 -- # [[ -z 1167645 ]] 00:10:55.524 10:06:08 -- common/autotest_common.sh@309 -- # kill -0 1167645 00:10:55.524 10:06:08 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:10:55.524 10:06:08 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:10:55.524 10:06:08 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:10:55.524 10:06:08 -- common/autotest_common.sh@322 -- # local mount target_dir 00:10:55.524 10:06:08 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:10:55.524 10:06:08 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:10:55.524 10:06:08 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:10:55.524 10:06:08 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:10:55.524 10:06:08 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.NjZ7Rh 00:10:55.524 10:06:08 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:55.524 10:06:08 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:10:55.524 10:06:08 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:10:55.524 10:06:08 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.NjZ7Rh/tests/nvmf /tmp/spdk.NjZ7Rh 00:10:55.524 10:06:08 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:10:55.524 10:06:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:55.524 10:06:08 -- common/autotest_common.sh@318 -- # df -T 00:10:55.524 10:06:08 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 
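set_test_storage, whose expansion fills the surrounding lines, parses `df -T` output into per-mount arrays and later picks the first candidate directory whose filesystem can hold the requested size (2214592512 bytes here, i.e. the 2 GiB argument plus 64 MiB). A stripped-down sketch of the parsing loop (array names follow the trace; unit conversion and candidate selection are omitted):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)
    requested_size=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))   # 2214592512, matching the trace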
00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:10:55.524 10:06:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:10:55.524 10:06:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=818380800 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:10:55.524 10:06:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=4466049024 00:10:55.524 10:06:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=87558037504 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94508572672 00:10:55.524 10:06:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=6950535168 00:10:55.524 10:06:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=47251693568 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254286336 00:10:55.524 10:06:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:10:55.524 10:06:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=18895835136 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18901716992 00:10:55.524 10:06:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=5881856 00:10:55.524 10:06:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=47253860352 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254286336 00:10:55.524 10:06:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=425984 00:10:55.524 10:06:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:10:55.524 10:06:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450852352 00:10:55.524 10:06:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450856448 00:10:55.524 10:06:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:10:55.524 10:06:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:55.524 10:06:08 -- common/autotest_common.sh@357 
-- # printf '* Looking for test storage...\n' 00:10:55.524 * Looking for test storage... 00:10:55.524 10:06:08 -- common/autotest_common.sh@359 -- # local target_space new_size 00:10:55.524 10:06:08 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:10:55.524 10:06:08 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:10:55.524 10:06:08 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:55.524 10:06:08 -- common/autotest_common.sh@363 -- # mount=/ 00:10:55.525 10:06:08 -- common/autotest_common.sh@365 -- # target_space=87558037504 00:10:55.525 10:06:08 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:10:55.525 10:06:08 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:10:55.525 10:06:08 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:10:55.525 10:06:08 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:10:55.525 10:06:08 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:10:55.525 10:06:08 -- common/autotest_common.sh@372 -- # new_size=9165127680 00:10:55.525 10:06:08 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:55.525 10:06:08 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:10:55.525 10:06:08 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:10:55.525 10:06:08 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:10:55.525 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:10:55.525 10:06:08 -- common/autotest_common.sh@380 -- # return 0 00:10:55.525 10:06:08 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:10:55.525 10:06:08 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:10:55.525 10:06:08 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:55.525 10:06:08 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:55.525 10:06:08 -- common/autotest_common.sh@1672 -- # true 00:10:55.525 10:06:08 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:10:55.525 10:06:08 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:55.525 10:06:08 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:55.525 10:06:08 -- common/autotest_common.sh@27 -- # exec 00:10:55.525 10:06:08 -- common/autotest_common.sh@29 -- # exec 00:10:55.525 10:06:08 -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:55.525 10:06:08 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:55.525 10:06:08 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:55.525 10:06:08 -- common/autotest_common.sh@18 -- # set -x 00:10:55.525 10:06:08 -- nvmf/run.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:10:55.525 10:06:08 -- ../common.sh@8 -- # pids=() 00:10:55.525 10:06:08 -- nvmf/run.sh@55 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:10:55.525 10:06:08 -- nvmf/run.sh@56 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:10:55.525 10:06:08 -- nvmf/run.sh@56 -- # fuzz_num=25 00:10:55.525 10:06:08 -- nvmf/run.sh@57 -- # (( fuzz_num != 0 )) 00:10:55.525 10:06:08 -- nvmf/run.sh@59 -- # trap 'cleanup /tmp/llvm_fuzz*; exit 1' SIGINT SIGTERM EXIT 00:10:55.525 10:06:08 -- nvmf/run.sh@61 -- # mem_size=512 00:10:55.525 10:06:08 -- nvmf/run.sh@62 -- # [[ 1 -eq 1 ]] 00:10:55.525 10:06:08 -- nvmf/run.sh@63 -- # start_llvm_fuzz_short 25 1 00:10:55.525 10:06:08 -- ../common.sh@69 -- # local fuzz_num=25 00:10:55.525 10:06:08 -- ../common.sh@70 -- # local time=1 00:10:55.525 10:06:08 -- ../common.sh@72 -- # (( i = 0 )) 00:10:55.525 10:06:08 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:55.525 10:06:08 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:10:55.525 10:06:08 -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:10:55.525 10:06:08 -- nvmf/run.sh@24 -- # local timen=1 00:10:55.525 10:06:08 -- nvmf/run.sh@25 -- # local core=0x1 00:10:55.525 10:06:08 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:10:55.525 10:06:08 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:10:55.525 10:06:08 -- nvmf/run.sh@29 -- # printf %02d 0 00:10:55.525 10:06:08 -- nvmf/run.sh@29 -- # port=4400 00:10:55.525 10:06:08 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:10:55.525 10:06:08 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:10:55.525 10:06:08 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:55.525 10:06:08 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 -r /var/tmp/spdk0.sock 00:10:55.525 [2024-04-24 10:06:08.760822] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
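The llvm_nvme_fuzz invocation just logged is assembled per fuzzer: the harness counts 25 fuzzers by grepping '.fn =' entries out of llvm_nvme_fuzz.c, gives fuzzer N the TCP port 44NN, and rewrites the stock fuzz_json.conf with sed so the target listens on that port. A sketch of that per-fuzzer setup for fuzzer 0, with the long workspace paths shortened and the redirection into the temp config inferred from the -c argument:

    fuzzer_type=0
    timen=1                                        # one second per fuzzer in the short run
    port="44$(printf %02d "$fuzzer_type")"         # 4400 for fuzzer 0, 4401 for fuzzer 1, ...
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" test/fuzz/llvm/nvmf/fuzz_json.conf \
        > "/tmp/fuzz_json_${fuzzer_type}.conf"
    test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 \
        -F "$trid" -c "/tmp/fuzz_json_${fuzzer_type}.conf" -t "$timen" \
        -D "../corpus/llvm_nvmf_${fuzzer_type}" -Z "$fuzzer_type" -r /var/tmp/spdk0.sock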
00:10:55.525 [2024-04-24 10:06:08.760894] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167753 ] 00:10:55.784 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.784 [2024-04-24 10:06:09.041119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.042 [2024-04-24 10:06:09.128482] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:56.042 [2024-04-24 10:06:09.128616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.042 [2024-04-24 10:06:09.187466] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.042 [2024-04-24 10:06:09.203672] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:10:56.042 INFO: Running with entropic power schedule (0xFF, 100). 00:10:56.042 INFO: Seed: 865640438 00:10:56.042 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:10:56.042 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:10:56.042 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:10:56.043 INFO: A corpus is not provided, starting from an empty corpus 00:10:56.043 #2 INITED exec/s: 0 rss: 60Mb 00:10:56.043 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:56.043 This may also happen if the target rejected all inputs we tried so far 00:10:56.043 [2024-04-24 10:06:09.248409] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:56.043 [2024-04-24 10:06:09.248451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:56.302 NEW_FUNC[1/663]: 0x47fbf0 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:10:56.302 NEW_FUNC[2/663]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:56.302 #7 NEW cov: 11450 ft: 11451 corp: 2/108b lim: 320 exec/s: 0 rss: 68Mb L: 107/107 MS: 5 CopyPart-CMP-ChangeBit-CrossOver-InsertRepeatedBytes- DE: "8\000\000\000\000\000\000\000"- 00:10:56.302 [2024-04-24 10:06:09.579262] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:56.302 [2024-04-24 10:06:09.579312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:56.561 #8 NEW cov: 11563 ft: 12011 corp: 3/215b lim: 320 exec/s: 0 rss: 68Mb L: 107/107 MS: 1 ChangeByte- 00:10:56.561 [2024-04-24 10:06:09.649261] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:56.561 [2024-04-24 10:06:09.649297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:56.561 #9 NEW cov: 11569 ft: 12189 corp: 4/322b lim: 320 exec/s: 0 rss: 68Mb L: 107/107 MS: 1 ShuffleBytes- 00:10:56.561 [2024-04-24 10:06:09.709389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:56.561 [2024-04-24 10:06:09.709420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:56.561 #15 NEW cov: 11654 ft: 12426 corp: 5/429b lim: 320 exec/s: 0 rss: 68Mb L: 107/107 MS: 1 ChangeByte- 00:10:56.561 [2024-04-24 10:06:09.769568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:56.561 [2024-04-24 10:06:09.769598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:56.561 #16 NEW cov: 11654 ft: 12526 corp: 6/536b lim: 320 exec/s: 0 rss: 68Mb L: 107/107 MS: 1 ChangeBit- 00:10:56.561 [2024-04-24 10:06:09.819751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:56.561 [2024-04-24 10:06:09.819782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:56.561 [2024-04-24 10:06:09.819812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff 00:10:56.562 [2024-04-24 10:06:09.819828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:56.820 #17 NEW cov: 11677 ft: 12904 corp: 7/692b lim: 320 exec/s: 0 rss: 68Mb L: 156/156 MS: 1 InsertRepeatedBytes- 00:10:56.820 [2024-04-24 10:06:09.869860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:56.820 [2024-04-24 10:06:09.869891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:56.820 [2024-04-24 10:06:09.869924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:31313131 cdw11:31313131 SGL TRANSPORT DATA BLOCK TRANSPORT 0x313131313131ffff 00:10:56.820 [2024-04-24 10:06:09.869940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:56.820 NEW_FUNC[1/1]: 0x12d8670 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2014 00:10:56.820 #18 NEW cov: 11708 ft: 13057 corp: 8/869b lim: 320 exec/s: 0 rss: 68Mb L: 177/177 MS: 1 InsertRepeatedBytes- 00:10:56.820 [2024-04-24 10:06:09.940017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:56.820 [2024-04-24 10:06:09.940053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:56.820 #19 NEW cov: 11708 ft: 13162 corp: 9/976b lim: 320 exec/s: 0 rss: 68Mb L: 107/177 MS: 1 CrossOver- 00:10:56.820 [2024-04-24 10:06:09.990282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:56.820 [2024-04-24 10:06:09.990316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:56.820 [2024-04-24 
10:06:09.990350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff 00:10:56.820 [2024-04-24 10:06:09.990368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:56.820 #20 NEW cov: 11708 ft: 13208 corp: 10/1133b lim: 320 exec/s: 0 rss: 69Mb L: 157/177 MS: 1 InsertByte- 00:10:56.820 [2024-04-24 10:06:10.060456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:56.820 [2024-04-24 10:06:10.060502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.078 #21 NEW cov: 11708 ft: 13262 corp: 11/1240b lim: 320 exec/s: 0 rss: 69Mb L: 107/177 MS: 1 ShuffleBytes- 00:10:57.078 [2024-04-24 10:06:10.130610] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.078 [2024-04-24 10:06:10.130650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.078 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:10:57.078 #22 NEW cov: 11725 ft: 13293 corp: 12/1342b lim: 320 exec/s: 0 rss: 69Mb L: 102/177 MS: 1 EraseBytes- 00:10:57.078 [2024-04-24 10:06:10.180691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (38) qid:0 cid:4 nsid:4000000a cdw10:ffffffff cdw11:ffffffff 00:10:57.078 [2024-04-24 10:06:10.180726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.078 [2024-04-24 10:06:10.180759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:ffffffff 00:10:57.078 [2024-04-24 10:06:10.180775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:57.078 #27 NEW cov: 11726 ft: 13373 corp: 13/1491b lim: 320 exec/s: 0 rss: 69Mb L: 149/177 MS: 5 InsertByte-ChangeByte-EraseBytes-CopyPart-CrossOver- 00:10:57.078 [2024-04-24 10:06:10.240846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.078 [2024-04-24 10:06:10.240879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.078 #28 NEW cov: 11726 ft: 13424 corp: 14/1560b lim: 320 exec/s: 28 rss: 69Mb L: 69/177 MS: 1 EraseBytes- 00:10:57.078 [2024-04-24 10:06:10.291017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.078 [2024-04-24 10:06:10.291049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.078 #29 NEW cov: 11726 ft: 13451 corp: 15/1667b lim: 320 exec/s: 29 rss: 69Mb L: 107/177 MS: 1 ChangeByte- 00:10:57.078 [2024-04-24 10:06:10.341070] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.078 [2024-04-24 10:06:10.341101] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.337 #30 NEW cov: 11726 ft: 13475 corp: 16/1774b lim: 320 exec/s: 30 rss: 69Mb L: 107/177 MS: 1 ChangeBit- 00:10:57.337 [2024-04-24 10:06:10.391460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.337 [2024-04-24 10:06:10.391500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.337 [2024-04-24 10:06:10.391563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff 00:10:57.337 [2024-04-24 10:06:10.391583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:57.337 #31 NEW cov: 11726 ft: 13546 corp: 17/1932b lim: 320 exec/s: 31 rss: 69Mb L: 158/177 MS: 1 InsertByte- 00:10:57.337 [2024-04-24 10:06:10.441926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.337 [2024-04-24 10:06:10.441952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.337 #32 NEW cov: 11726 ft: 13648 corp: 18/2039b lim: 320 exec/s: 32 rss: 69Mb L: 107/177 MS: 1 CrossOver- 00:10:57.337 [2024-04-24 10:06:10.482125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.337 [2024-04-24 10:06:10.482153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.337 [2024-04-24 10:06:10.482220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:45000000 cdw11:ffffff00 00:10:57.337 [2024-04-24 10:06:10.482234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:57.337 #33 NEW cov: 11726 ft: 13662 corp: 19/2196b lim: 320 exec/s: 33 rss: 69Mb L: 157/177 MS: 1 InsertByte- 00:10:57.337 [2024-04-24 10:06:10.522355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (38) qid:0 cid:4 nsid:4000000a cdw10:ffffffff cdw11:ffffffff 00:10:57.337 [2024-04-24 10:06:10.522381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.337 [2024-04-24 10:06:10.522432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:ffffffff 00:10:57.337 [2024-04-24 10:06:10.522446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:57.337 [2024-04-24 10:06:10.522503] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.337 [2024-04-24 10:06:10.522518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:57.337 #34 NEW cov: 11726 ft: 13811 corp: 20/2400b lim: 320 exec/s: 34 rss: 69Mb L: 204/204 MS: 1 CrossOver- 00:10:57.337 [2024-04-24 10:06:10.562350] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ff400000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.337 [2024-04-24 10:06:10.562376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.337 [2024-04-24 10:06:10.562435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.337 [2024-04-24 10:06:10.562450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:57.337 #35 NEW cov: 11726 ft: 13835 corp: 21/2580b lim: 320 exec/s: 35 rss: 69Mb L: 180/204 MS: 1 CopyPart- 00:10:57.337 [2024-04-24 10:06:10.602442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.337 [2024-04-24 10:06:10.602467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.337 [2024-04-24 10:06:10.602524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.337 [2024-04-24 10:06:10.602539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:57.595 #36 NEW cov: 11726 ft: 13859 corp: 22/2712b lim: 320 exec/s: 36 rss: 69Mb L: 132/204 MS: 1 CrossOver- 00:10:57.595 [2024-04-24 10:06:10.642597] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.595 [2024-04-24 10:06:10.642622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.595 [2024-04-24 10:06:10.642672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:10:57.595 [2024-04-24 10:06:10.642685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:57.595 #37 NEW cov: 11726 ft: 13898 corp: 23/2868b lim: 320 exec/s: 37 rss: 69Mb L: 156/204 MS: 1 CopyPart- 00:10:57.595 [2024-04-24 10:06:10.682609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.595 [2024-04-24 10:06:10.682637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.595 #38 NEW cov: 11726 ft: 13962 corp: 24/2988b lim: 320 exec/s: 38 rss: 69Mb L: 120/204 MS: 1 InsertRepeatedBytes- 00:10:57.595 [2024-04-24 10:06:10.722804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (38) qid:0 cid:4 nsid:a cdw10:ffffffff cdw11:ffffffff 00:10:57.595 [2024-04-24 10:06:10.722829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.595 [2024-04-24 10:06:10.722878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:ffffffff 00:10:57.595 [2024-04-24 10:06:10.722892] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:57.595 #39 NEW cov: 11726 ft: 13983 corp: 25/3137b lim: 320 exec/s: 39 rss: 69Mb L: 149/204 MS: 1 ChangeBinInt- 00:10:57.595 [2024-04-24 10:06:10.762791] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.595 [2024-04-24 10:06:10.762816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.595 #40 NEW cov: 11726 ft: 14051 corp: 26/3206b lim: 320 exec/s: 40 rss: 69Mb L: 69/204 MS: 1 ShuffleBytes- 00:10:57.595 [2024-04-24 10:06:10.802927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.595 [2024-04-24 10:06:10.802952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.595 #41 NEW cov: 11726 ft: 14059 corp: 27/3313b lim: 320 exec/s: 41 rss: 69Mb L: 107/204 MS: 1 ShuffleBytes- 00:10:57.595 [2024-04-24 10:06:10.843088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (85) qid:0 cid:4 nsid:85858585 cdw10:85858585 cdw11:85858585 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:57.595 [2024-04-24 10:06:10.843113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.595 NEW_FUNC[1/1]: 0x16d87d0 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:10:57.595 #44 NEW cov: 11739 ft: 14375 corp: 28/3379b lim: 320 exec/s: 44 rss: 69Mb L: 66/204 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:10:57.853 [2024-04-24 10:06:10.883137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.853 [2024-04-24 10:06:10.883162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.853 #45 NEW cov: 11739 ft: 14408 corp: 29/3486b lim: 320 exec/s: 45 rss: 69Mb L: 107/204 MS: 1 ChangeBinInt- 00:10:57.853 [2024-04-24 10:06:10.923339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (38) qid:0 cid:4 nsid:a cdw10:ffffffff cdw11:ffffffff 00:10:57.853 [2024-04-24 10:06:10.923364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.853 [2024-04-24 10:06:10.923413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:ffffffff 00:10:57.853 [2024-04-24 10:06:10.923427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:57.853 #46 NEW cov: 11739 ft: 14411 corp: 30/3635b lim: 320 exec/s: 46 rss: 69Mb L: 149/204 MS: 1 ShuffleBytes- 00:10:57.853 [2024-04-24 10:06:10.963342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.853 [2024-04-24 10:06:10.963367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.853 #47 NEW cov: 11739 ft: 14429 corp: 31/3737b lim: 320 exec/s: 47 
rss: 69Mb L: 102/204 MS: 1 ChangeBinInt- 00:10:57.853 [2024-04-24 10:06:11.003440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.853 [2024-04-24 10:06:11.003467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.853 #48 NEW cov: 11739 ft: 14434 corp: 32/3839b lim: 320 exec/s: 48 rss: 70Mb L: 102/204 MS: 1 PersAutoDict- DE: "8\000\000\000\000\000\000\000"- 00:10:57.853 [2024-04-24 10:06:11.043589] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.853 [2024-04-24 10:06:11.043616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.853 #49 NEW cov: 11739 ft: 14449 corp: 33/3946b lim: 320 exec/s: 49 rss: 70Mb L: 107/204 MS: 1 ChangeByte- 00:10:57.853 [2024-04-24 10:06:11.083860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.853 [2024-04-24 10:06:11.083885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:57.853 [2024-04-24 10:06:11.083943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6f) qid:0 cid:5 nsid:6f6f6f6f cdw10:6f6f6f6f cdw11:6f6f6f6f SGL TRANSPORT DATA BLOCK TRANSPORT 0x6f6f6f6f6f6f6f6f 00:10:57.853 [2024-04-24 10:06:11.083958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:57.853 [2024-04-24 10:06:11.084012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6f) qid:0 cid:6 nsid:6f6f6f6f cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.853 [2024-04-24 10:06:11.084027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:57.853 #50 NEW cov: 11739 ft: 14511 corp: 34/4139b lim: 320 exec/s: 50 rss: 70Mb L: 193/204 MS: 1 InsertRepeatedBytes- 00:10:57.853 [2024-04-24 10:06:11.123826] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:57.853 [2024-04-24 10:06:11.123852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:58.111 #51 NEW cov: 11746 ft: 14525 corp: 35/4208b lim: 320 exec/s: 51 rss: 70Mb L: 69/204 MS: 1 CopyPart- 00:10:58.111 [2024-04-24 10:06:11.164063] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ff400000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:58.111 [2024-04-24 10:06:11.164088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:58.111 [2024-04-24 10:06:11.164145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:58.111 [2024-04-24 10:06:11.164160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:10:58.111 #52 NEW cov: 11746 ft: 14534 corp: 36/4388b lim: 320 exec/s: 52 rss: 70Mb L: 180/204 MS: 1 ChangeBit- 00:10:58.111 [2024-04-24 10:06:11.204075] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:58.111 [2024-04-24 10:06:11.204100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:58.111 [2024-04-24 10:06:11.244203] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:10:58.111 [2024-04-24 10:06:11.244228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:58.111 #54 NEW cov: 11746 ft: 14586 corp: 37/4495b lim: 320 exec/s: 27 rss: 70Mb L: 107/204 MS: 2 PersAutoDict-ChangeASCIIInt- DE: "8\000\000\000\000\000\000\000"- 00:10:58.111 #54 DONE cov: 11746 ft: 14586 corp: 37/4495b lim: 320 exec/s: 27 rss: 70Mb 00:10:58.111 ###### Recommended dictionary. ###### 00:10:58.111 "8\000\000\000\000\000\000\000" # Uses: 2 00:10:58.111 ###### End of recommended dictionary. ###### 00:10:58.111 Done 54 runs in 2 second(s) 00:10:58.370 10:06:11 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_0.conf 00:10:58.370 10:06:11 -- ../common.sh@72 -- # (( i++ )) 00:10:58.370 10:06:11 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:58.370 10:06:11 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:10:58.370 10:06:11 -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:10:58.370 10:06:11 -- nvmf/run.sh@24 -- # local timen=1 00:10:58.370 10:06:11 -- nvmf/run.sh@25 -- # local core=0x1 00:10:58.370 10:06:11 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:10:58.370 10:06:11 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:10:58.370 10:06:11 -- nvmf/run.sh@29 -- # printf %02d 1 00:10:58.370 10:06:11 -- nvmf/run.sh@29 -- # port=4401 00:10:58.370 10:06:11 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:10:58.370 10:06:11 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:10:58.370 10:06:11 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:58.370 10:06:11 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 -r /var/tmp/spdk1.sock 00:10:58.370 [2024-04-24 10:06:11.447162] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
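The nvmf/run.sh xtrace above is the whole per-round launch sequence: remove the previous round's JSON config, derive the TCP port from the round index, create the corpus directory, rewrite trsvcid in fuzz_json.conf, and start llvm_nvme_fuzz against the resulting target. A minimal sketch of the equivalent commands, assuming the paths and flags shown in the trace and that the sed output is redirected into the per-round config (the xtrace does not print redirections):

# Per-round launch, reconstructed from the nvmf/run.sh trace above.
# Paths, flags and the 44xx port scheme are taken from the log; the
# redirection of sed into $CONF is assumed, not shown by the trace.
N=1
WS=/var/jenkins/workspace/short-fuzz-phy-autotest
PORT=44$(printf %02d "$N")                      # round 1 -> 4401
CONF=/tmp/fuzz_json_$N.conf
CORPUS=$WS/spdk/../corpus/llvm_nvmf_$N
TRID="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$PORT"
mkdir -p "$CORPUS"
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$PORT\"/" \
    "$WS/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$CONF"
"$WS/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$WS/spdk/../output/llvm/" -F "$TRID" -c "$CONF" -t 1 \
    -D "$CORPUS" -Z "$N" -r /var/tmp/spdk$N.sock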
00:10:58.370 [2024-04-24 10:06:11.447245] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168139 ] 00:10:58.370 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.628 [2024-04-24 10:06:11.721522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.628 [2024-04-24 10:06:11.814273] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:58.628 [2024-04-24 10:06:11.814404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.628 [2024-04-24 10:06:11.872837] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.628 [2024-04-24 10:06:11.889037] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:10:58.628 INFO: Running with entropic power schedule (0xFF, 100). 00:10:58.628 INFO: Seed: 3551648136 00:10:58.885 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:10:58.885 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:10:58.886 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:10:58.886 INFO: A corpus is not provided, starting from an empty corpus 00:10:58.886 #2 INITED exec/s: 0 rss: 61Mb 00:10:58.886 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:58.886 This may also happen if the target rejected all inputs we tried so far 00:10:58.886 [2024-04-24 10:06:11.943741] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:10:58.886 [2024-04-24 10:06:11.943956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.886 [2024-04-24 10:06:11.943983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:58.886 [2024-04-24 10:06:11.944015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.886 [2024-04-24 10:06:11.944035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:58.886 [2024-04-24 10:06:11.944075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.886 [2024-04-24 10:06:11.944091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:59.143 NEW_FUNC[1/664]: 0x4804f0 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:10:59.143 NEW_FUNC[2/664]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:59.143 #4 NEW cov: 11559 ft: 11560 corp: 2/22b lim: 30 exec/s: 0 rss: 67Mb L: 21/21 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:10:59.143 [2024-04-24 10:06:12.274644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
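Each round reports "0 files found" in the -D directory and falls back to "A corpus is not provided, starting from an empty corpus". If seed inputs were wanted, one hypothetical way to supply them, assuming the harness simply picks up whatever files already sit in that corpus directory at launch (the log does not confirm this), would be:

# Hypothetical pre-seeding step; the seed directory below does not exist in this job.
CORPUS=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1
mkdir -p "$CORPUS"
cp /path/to/nvmf_seed_inputs/* "$CORPUS"/   # placeholder path for seed files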
0x0 00:10:59.143 [2024-04-24 10:06:12.274692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.143 #7 NEW cov: 11678 ft: 12425 corp: 3/30b lim: 30 exec/s: 0 rss: 68Mb L: 8/21 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:10:59.143 [2024-04-24 10:06:12.334511] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (50180) > buf size (4096) 00:10:59.143 [2024-04-24 10:06:12.334650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:31000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.143 [2024-04-24 10:06:12.334674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.143 #11 NEW cov: 11684 ft: 12795 corp: 4/37b lim: 30 exec/s: 0 rss: 68Mb L: 7/21 MS: 4 ChangeByte-CopyPart-ChangeByte-InsertRepeatedBytes- 00:10:59.143 [2024-04-24 10:06:12.384694] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:10:59.143 [2024-04-24 10:06:12.384768] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (227332) > buf size (4096) 00:10:59.143 [2024-04-24 10:06:12.384918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.143 [2024-04-24 10:06:12.384939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.143 [2024-04-24 10:06:12.384971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:de000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.143 [2024-04-24 10:06:12.384986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:59.143 [2024-04-24 10:06:12.385013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.143 [2024-04-24 10:06:12.385030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:59.401 #12 NEW cov: 11769 ft: 13114 corp: 5/58b lim: 30 exec/s: 0 rss: 68Mb L: 21/21 MS: 1 ChangeByte- 00:10:59.401 [2024-04-24 10:06:12.454855] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x9999 00:10:59.401 [2024-04-24 10:06:12.454929] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009999 00:10:59.401 [2024-04-24 10:06:12.454987] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009999 00:10:59.402 [2024-04-24 10:06:12.455044] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009999 00:10:59.402 [2024-04-24 10:06:12.455169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:cbff0040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.402 [2024-04-24 10:06:12.455194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.402 [2024-04-24 10:06:12.455225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:99998199 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.402 [2024-04-24 
10:06:12.455242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:59.402 [2024-04-24 10:06:12.455270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:99998199 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.402 [2024-04-24 10:06:12.455285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:59.402 [2024-04-24 10:06:12.455313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:99998199 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.402 [2024-04-24 10:06:12.455328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:59.402 #17 NEW cov: 11775 ft: 13812 corp: 6/86b lim: 30 exec/s: 0 rss: 68Mb L: 28/28 MS: 5 EraseBytes-ChangeByte-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:10:59.402 [2024-04-24 10:06:12.515035] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:10:59.402 [2024-04-24 10:06:12.515251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.402 [2024-04-24 10:06:12.515275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.402 [2024-04-24 10:06:12.515306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.402 [2024-04-24 10:06:12.515323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:59.402 [2024-04-24 10:06:12.515351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.402 [2024-04-24 10:06:12.515368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:59.402 #18 NEW cov: 11775 ft: 13960 corp: 7/107b lim: 30 exec/s: 0 rss: 68Mb L: 21/28 MS: 1 CrossOver- 00:10:59.402 [2024-04-24 10:06:12.575191] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:10:59.402 [2024-04-24 10:06:12.575314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.402 [2024-04-24 10:06:12.575338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.402 #21 NEW cov: 11775 ft: 14061 corp: 8/113b lim: 30 exec/s: 0 rss: 68Mb L: 6/28 MS: 3 CrossOver-CopyPart-CopyPart- 00:10:59.402 [2024-04-24 10:06:12.625374] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (4100) > buf size (4096) 00:10:59.402 [2024-04-24 10:06:12.625502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:04000031 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.402 [2024-04-24 10:06:12.625526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.402 #26 NEW cov: 11775 ft: 
14116 corp: 9/122b lim: 30 exec/s: 0 rss: 68Mb L: 9/28 MS: 5 CrossOver-ChangeBinInt-ChangeBit-EraseBytes-CrossOver- 00:10:59.660 [2024-04-24 10:06:12.685493] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:10:59.660 [2024-04-24 10:06:12.685695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.660 [2024-04-24 10:06:12.685719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.660 [2024-04-24 10:06:12.685754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.660 [2024-04-24 10:06:12.685770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:59.660 [2024-04-24 10:06:12.685797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.660 [2024-04-24 10:06:12.685812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:59.660 #27 NEW cov: 11775 ft: 14156 corp: 10/143b lim: 30 exec/s: 0 rss: 68Mb L: 21/28 MS: 1 ChangeByte- 00:10:59.660 [2024-04-24 10:06:12.735596] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (50184) > buf size (4096) 00:10:59.660 [2024-04-24 10:06:12.735727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:31010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.660 [2024-04-24 10:06:12.735751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.660 #28 NEW cov: 11775 ft: 14194 corp: 11/150b lim: 30 exec/s: 0 rss: 68Mb L: 7/28 MS: 1 CMP- DE: "\001\000"- 00:10:59.660 [2024-04-24 10:06:12.795720] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (63232) > len (4) 00:10:59.660 [2024-04-24 10:06:12.795841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.660 [2024-04-24 10:06:12.795863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.660 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:10:59.660 #29 NEW cov: 11798 ft: 14246 corp: 12/158b lim: 30 exec/s: 0 rss: 68Mb L: 8/28 MS: 1 ChangeBinInt- 00:10:59.660 [2024-04-24 10:06:12.855918] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:10:59.660 [2024-04-24 10:06:12.855992] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (227332) > buf size (4096) 00:10:59.660 [2024-04-24 10:06:12.856052] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:10:59.660 [2024-04-24 10:06:12.856171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.660 [2024-04-24 10:06:12.856194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.660 [2024-04-24 10:06:12.856225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:de000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.660 [2024-04-24 10:06:12.856242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:59.660 [2024-04-24 10:06:12.856270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.660 [2024-04-24 10:06:12.856285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:59.660 #30 NEW cov: 11798 ft: 14294 corp: 13/179b lim: 30 exec/s: 0 rss: 68Mb L: 21/28 MS: 1 CopyPart- 00:10:59.660 [2024-04-24 10:06:12.906024] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (4116) > buf size (4096) 00:10:59.660 [2024-04-24 10:06:12.906156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:04040031 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.660 [2024-04-24 10:06:12.906179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.918 #31 NEW cov: 11798 ft: 14320 corp: 14/188b lim: 30 exec/s: 31 rss: 69Mb L: 9/28 MS: 1 ChangeBit- 00:10:59.918 [2024-04-24 10:06:12.976269] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:10:59.918 [2024-04-24 10:06:12.976479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.918 [2024-04-24 10:06:12.976503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.918 [2024-04-24 10:06:12.976537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.918 [2024-04-24 10:06:12.976555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:59.918 [2024-04-24 10:06:12.976584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.918 [2024-04-24 10:06:12.976600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:59.918 #32 NEW cov: 11798 ft: 14337 corp: 15/209b lim: 30 exec/s: 32 rss: 69Mb L: 21/28 MS: 1 ChangeBit- 00:10:59.918 [2024-04-24 10:06:13.036374] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (48328) > buf size (4096) 00:10:59.918 [2024-04-24 10:06:13.036497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2f310001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.918 [2024-04-24 10:06:13.036520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.918 #36 NEW cov: 11798 ft: 14352 corp: 16/217b lim: 30 exec/s: 36 rss: 69Mb L: 8/28 MS: 4 ChangeByte-ShuffleBytes-ChangeBit-CrossOver- 00:10:59.918 [2024-04-24 
10:06:13.086502] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (50180) > buf size (4096) 00:10:59.918 [2024-04-24 10:06:13.086618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:310000bf cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.918 [2024-04-24 10:06:13.086640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.918 #37 NEW cov: 11798 ft: 14382 corp: 17/224b lim: 30 exec/s: 37 rss: 69Mb L: 7/28 MS: 1 ChangeByte- 00:10:59.918 [2024-04-24 10:06:13.136620] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096) 00:10:59.918 [2024-04-24 10:06:13.136692] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (227332) > buf size (4096) 00:10:59.918 [2024-04-24 10:06:13.136750] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:10:59.918 [2024-04-24 10:06:13.136856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.918 [2024-04-24 10:06:13.136878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:59.918 [2024-04-24 10:06:13.136909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:de000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.918 [2024-04-24 10:06:13.136926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:59.918 [2024-04-24 10:06:13.136953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:59.918 [2024-04-24 10:06:13.136968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:59.918 #38 NEW cov: 11798 ft: 14416 corp: 18/245b lim: 30 exec/s: 38 rss: 69Mb L: 21/28 MS: 1 ChangeBit- 00:11:00.176 [2024-04-24 10:06:13.196854] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096) 00:11:00.176 [2024-04-24 10:06:13.196938] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (227332) > buf size (4096) 00:11:00.176 [2024-04-24 10:06:13.197110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.176 [2024-04-24 10:06:13.197134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.176 [2024-04-24 10:06:13.197169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:de000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.176 [2024-04-24 10:06:13.197186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:00.176 [2024-04-24 10:06:13.197217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.176 [2024-04-24 10:06:13.197236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:00.176 #39 NEW cov: 11798 ft: 14444 corp: 19/266b lim: 30 exec/s: 39 rss: 69Mb L: 21/28 MS: 1 ShuffleBytes- 00:11:00.176 [2024-04-24 10:06:13.256972] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096) 00:11:00.176 [2024-04-24 10:06:13.257044] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (227332) > buf size (4096) 00:11:00.176 [2024-04-24 10:06:13.257110] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:11:00.176 [2024-04-24 10:06:13.257221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.176 [2024-04-24 10:06:13.257242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.176 [2024-04-24 10:06:13.257273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:de000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.176 [2024-04-24 10:06:13.257288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:00.176 [2024-04-24 10:06:13.257315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.176 [2024-04-24 10:06:13.257331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:00.176 #40 NEW cov: 11798 ft: 14479 corp: 20/287b lim: 30 exec/s: 40 rss: 69Mb L: 21/28 MS: 1 ChangeBit- 00:11:00.176 [2024-04-24 10:06:13.307046] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (310276) > buf size (4096) 00:11:00.176 [2024-04-24 10:06:13.307177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2f008131 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.176 [2024-04-24 10:06:13.307199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.176 #41 NEW cov: 11798 ft: 14491 corp: 21/295b lim: 30 exec/s: 41 rss: 69Mb L: 8/28 MS: 1 ShuffleBytes- 00:11:00.176 [2024-04-24 10:06:13.367246] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (48328) > buf size (4096) 00:11:00.176 [2024-04-24 10:06:13.367361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2f310001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.176 [2024-04-24 10:06:13.367383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.176 #42 NEW cov: 11798 ft: 14498 corp: 22/303b lim: 30 exec/s: 42 rss: 69Mb L: 8/28 MS: 1 ShuffleBytes- 00:11:00.176 [2024-04-24 10:06:13.427394] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xf73e 00:11:00.176 [2024-04-24 10:06:13.427510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.176 [2024-04-24 10:06:13.427537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.435 #43 NEW cov: 11798 ft: 14511 
corp: 23/309b lim: 30 exec/s: 43 rss: 69Mb L: 6/28 MS: 1 EraseBytes- 00:11:00.435 [2024-04-24 10:06:13.487610] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (256) > len (4) 00:11:00.435 [2024-04-24 10:06:13.487742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.435 [2024-04-24 10:06:13.487767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.435 #44 NEW cov: 11798 ft: 14547 corp: 24/319b lim: 30 exec/s: 44 rss: 69Mb L: 10/28 MS: 1 PersAutoDict- DE: "\001\000"- 00:11:00.435 [2024-04-24 10:06:13.537875] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:00.435 [2024-04-24 10:06:13.538081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.435 [2024-04-24 10:06:13.538105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.435 [2024-04-24 10:06:13.538137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00de0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.435 [2024-04-24 10:06:13.538154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:00.435 [2024-04-24 10:06:13.538182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.435 [2024-04-24 10:06:13.538197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:00.435 #45 NEW cov: 11798 ft: 14632 corp: 25/341b lim: 30 exec/s: 45 rss: 69Mb L: 22/28 MS: 1 InsertByte- 00:11:00.435 [2024-04-24 10:06:13.588024] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x9999 00:11:00.435 [2024-04-24 10:06:13.588103] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009999 00:11:00.435 [2024-04-24 10:06:13.588162] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009999 00:11:00.435 [2024-04-24 10:06:13.588219] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009999 00:11:00.435 [2024-04-24 10:06:13.588321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:cbff0040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.435 [2024-04-24 10:06:13.588342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.435 [2024-04-24 10:06:13.588373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:99998199 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.435 [2024-04-24 10:06:13.588389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:00.435 [2024-04-24 10:06:13.588417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:99998199 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.435 [2024-04-24 10:06:13.588432] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:00.435 [2024-04-24 10:06:13.588460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:99998199 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.435 [2024-04-24 10:06:13.588475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:00.435 #46 NEW cov: 11798 ft: 14714 corp: 26/369b lim: 30 exec/s: 46 rss: 69Mb L: 28/28 MS: 1 ShuffleBytes- 00:11:00.435 [2024-04-24 10:06:13.658212] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (63232) > len (4) 00:11:00.435 [2024-04-24 10:06:13.658331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.435 [2024-04-24 10:06:13.658353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.435 #47 NEW cov: 11798 ft: 14722 corp: 27/377b lim: 30 exec/s: 47 rss: 69Mb L: 8/28 MS: 1 CopyPart- 00:11:00.435 [2024-04-24 10:06:13.708331] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (310276) > buf size (4096) 00:11:00.435 [2024-04-24 10:06:13.708456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2f008131 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.435 [2024-04-24 10:06:13.708481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.694 #48 NEW cov: 11798 ft: 14745 corp: 28/385b lim: 30 exec/s: 48 rss: 69Mb L: 8/28 MS: 1 ChangeBit- 00:11:00.694 [2024-04-24 10:06:13.758443] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:11:00.694 [2024-04-24 10:06:13.758518] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (56832) > len (4) 00:11:00.694 [2024-04-24 10:06:13.758707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.694 [2024-04-24 10:06:13.758729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.694 [2024-04-24 10:06:13.758760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.694 [2024-04-24 10:06:13.758775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:00.694 [2024-04-24 10:06:13.758804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.694 [2024-04-24 10:06:13.758820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:00.694 [2024-04-24 10:06:13.758847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.694 [2024-04-24 10:06:13.758863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:11:00.695 #49 NEW cov: 11798 ft: 14784 corp: 29/410b lim: 30 exec/s: 49 rss: 69Mb L: 25/28 MS: 1 InsertRepeatedBytes- 00:11:00.695 [2024-04-24 10:06:13.818602] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x312f 00:11:00.695 [2024-04-24 10:06:13.818724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.695 [2024-04-24 10:06:13.818746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.695 #50 NEW cov: 11805 ft: 14806 corp: 30/418b lim: 30 exec/s: 50 rss: 70Mb L: 8/28 MS: 1 ShuffleBytes- 00:11:00.695 [2024-04-24 10:06:13.878782] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10276) > buf size (4096) 00:11:00.695 [2024-04-24 10:06:13.878854] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (227332) > buf size (4096) 00:11:00.695 [2024-04-24 10:06:13.879041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.695 [2024-04-24 10:06:13.879071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:00.695 [2024-04-24 10:06:13.879106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:de000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.695 [2024-04-24 10:06:13.879122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:00.695 [2024-04-24 10:06:13.879150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.695 [2024-04-24 10:06:13.879165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:00.695 [2024-04-24 10:06:13.879192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:00.695 [2024-04-24 10:06:13.879207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:00.695 #51 NEW cov: 11805 ft: 14828 corp: 31/446b lim: 30 exec/s: 25 rss: 70Mb L: 28/28 MS: 1 CopyPart- 00:11:00.695 #51 DONE cov: 11805 ft: 14828 corp: 31/446b lim: 30 exec/s: 25 rss: 70Mb 00:11:00.695 ###### Recommended dictionary. ###### 00:11:00.695 "\001\000" # Uses: 1 00:11:00.695 ###### End of recommended dictionary. 
###### 00:11:00.695 Done 51 runs in 2 second(s) 00:11:00.953 10:06:14 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_1.conf 00:11:00.953 10:06:14 -- ../common.sh@72 -- # (( i++ )) 00:11:00.953 10:06:14 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:00.953 10:06:14 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:11:00.953 10:06:14 -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:11:00.953 10:06:14 -- nvmf/run.sh@24 -- # local timen=1 00:11:00.953 10:06:14 -- nvmf/run.sh@25 -- # local core=0x1 00:11:00.953 10:06:14 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:11:00.953 10:06:14 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:11:00.953 10:06:14 -- nvmf/run.sh@29 -- # printf %02d 2 00:11:00.953 10:06:14 -- nvmf/run.sh@29 -- # port=4402 00:11:00.953 10:06:14 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:11:00.953 10:06:14 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:11:00.953 10:06:14 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:00.953 10:06:14 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 -r /var/tmp/spdk2.sock 00:11:00.953 [2024-04-24 10:06:14.095421] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:00.953 [2024-04-24 10:06:14.095493] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168498 ] 00:11:00.953 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.212 [2024-04-24 10:06:14.362738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.212 [2024-04-24 10:06:14.449323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:01.212 [2024-04-24 10:06:14.449451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.470 [2024-04-24 10:06:14.507915] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.470 [2024-04-24 10:06:14.524123] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:11:01.470 INFO: Running with entropic power schedule (0xFF, 100). 00:11:01.470 INFO: Seed: 1891681038 00:11:01.470 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:01.470 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:01.470 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:11:01.470 INFO: A corpus is not provided, starting from an empty corpus 00:11:01.470 #2 INITED exec/s: 0 rss: 60Mb 00:11:01.470 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:01.470 This may also happen if the target rejected all inputs we tried so far 00:11:01.470 [2024-04-24 10:06:14.569078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.470 [2024-04-24 10:06:14.569113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:01.470 [2024-04-24 10:06:14.569147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.470 [2024-04-24 10:06:14.569162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:01.470 [2024-04-24 10:06:14.569191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.470 [2024-04-24 10:06:14.569206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:01.470 [2024-04-24 10:06:14.569235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.470 [2024-04-24 10:06:14.569250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:01.470 [2024-04-24 10:06:14.569278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.470 [2024-04-24 10:06:14.569292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:01.729 NEW_FUNC[1/662]: 0x482f10 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:11:01.729 NEW_FUNC[2/662]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:01.729 #3 NEW cov: 11488 ft: 11490 corp: 2/36b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:11:01.729 [2024-04-24 10:06:14.899867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.729 [2024-04-24 10:06:14.899931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:01.729 [2024-04-24 10:06:14.899967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48080048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.729 [2024-04-24 10:06:14.899984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:01.729 [2024-04-24 10:06:14.900014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.729 [2024-04-24 10:06:14.900030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:01.729 [2024-04-24 10:06:14.900066] nvme_qpair.c: 
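Every finding in this log uses the same libFuzzer status format, "#<execs> NEW cov: <edges> ft: <features> corp: <units>/<bytes> lim: <max len> exec/s: <rate> rss: <memory>", followed by the size (L:) and mutation sequence (MS:) of the input that produced it. A quick sketch for pulling the coverage progression out of a saved console log ("console.log" is a placeholder for wherever this output is captured):

# Print execution count, edge coverage and feature count for each NEW finding.
grep -o '#[0-9]\+ NEW cov: [0-9]\+ ft: [0-9]\+' console.log |
awk '{sub("#","",$1); print $1, $4, $6}'    # execs  cov  ft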
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.729 [2024-04-24 10:06:14.900083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:01.729 [2024-04-24 10:06:14.900113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.729 [2024-04-24 10:06:14.900130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:01.729 NEW_FUNC[1/1]: 0xf01470 in rte_rdtsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/rte_cycles.h:31 00:11:01.729 #9 NEW cov: 11602 ft: 11946 corp: 3/71b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ChangeBit- 00:11:01.729 [2024-04-24 10:06:14.969721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c9c900c9 cdw11:c900c9c9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.729 [2024-04-24 10:06:14.969753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:01.988 #11 NEW cov: 11608 ft: 12989 corp: 4/79b lim: 35 exec/s: 0 rss: 68Mb L: 8/35 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:11:01.988 [2024-04-24 10:06:15.030049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.030090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.030124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.030139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.030167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.030183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.030211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004821 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.030226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.030254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.030269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:01.988 #12 NEW cov: 11693 ft: 13270 corp: 5/114b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ChangeByte- 00:11:01.988 [2024-04-24 10:06:15.080021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c9c900c9 cdw11:c900c9c9 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.080053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:01.988 #16 NEW cov: 11693 ft: 13480 corp: 6/121b lim: 35 exec/s: 0 rss: 68Mb L: 7/35 MS: 4 ChangeBit-ChangeByte-CopyPart-CrossOver- 00:11:01.988 [2024-04-24 10:06:15.130337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.130367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.130401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:480a0048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.130416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.130445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.130460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.130488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004821 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.130507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.130535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.130550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:01.988 #17 NEW cov: 11693 ft: 13565 corp: 7/156b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 CopyPart- 00:11:01.988 [2024-04-24 10:06:15.190499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.190528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.190562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b7fb00b7 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.190578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.190606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.190622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.190650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.190665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.190693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.190708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:01.988 #18 NEW cov: 11693 ft: 13661 corp: 8/191b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ChangeBinInt- 00:11:01.988 [2024-04-24 10:06:15.260819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.260853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.260889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48080048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.260906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.260938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.260955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.260986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.261002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:01.988 [2024-04-24 10:06:15.261031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.988 [2024-04-24 10:06:15.261047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:02.247 #19 NEW cov: 11693 ft: 13687 corp: 9/226b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ShuffleBytes- 00:11:02.247 [2024-04-24 10:06:15.310661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c9c900c9 cdw11:c900c9c9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.247 [2024-04-24 10:06:15.310693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.247 #20 NEW cov: 11693 ft: 13740 corp: 10/233b lim: 35 exec/s: 0 rss: 68Mb L: 7/35 MS: 1 ShuffleBytes- 00:11:02.247 [2024-04-24 10:06:15.370986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.247 [2024-04-24 10:06:15.371017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.247 [2024-04-24 10:06:15.371050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b74800b7 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.247 [2024-04-24 10:06:15.371074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.247 [2024-04-24 10:06:15.371104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.247 [2024-04-24 10:06:15.371123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.247 [2024-04-24 10:06:15.371151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.247 [2024-04-24 10:06:15.371167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:02.247 #21 NEW cov: 11693 ft: 13791 corp: 11/265b lim: 35 exec/s: 0 rss: 68Mb L: 32/35 MS: 1 CrossOver- 00:11:02.247 [2024-04-24 10:06:15.431042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.247 [2024-04-24 10:06:15.431081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.247 [2024-04-24 10:06:15.431131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48080048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.247 [2024-04-24 10:06:15.431148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.247 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:02.247 #22 NEW cov: 11710 ft: 14081 corp: 12/285b lim: 35 exec/s: 0 rss: 69Mb L: 20/35 MS: 1 EraseBytes- 00:11:02.247 [2024-04-24 10:06:15.491260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.247 [2024-04-24 10:06:15.491291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.247 [2024-04-24 10:06:15.491323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.247 [2024-04-24 10:06:15.491339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.247 [2024-04-24 10:06:15.491367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48002148 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.247 [2024-04-24 10:06:15.491383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.507 #23 NEW cov: 11710 ft: 14272 corp: 13/312b lim: 35 exec/s: 0 rss: 69Mb L: 27/35 MS: 1 EraseBytes- 00:11:02.507 [2024-04-24 10:06:15.551477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.507 [2024-04-24 10:06:15.551508] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.507 [2024-04-24 10:06:15.551541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48080048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.507 [2024-04-24 10:06:15.551557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.507 [2024-04-24 10:06:15.551585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.507 [2024-04-24 10:06:15.551601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.507 [2024-04-24 10:06:15.551629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.507 [2024-04-24 10:06:15.551644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:02.507 [2024-04-24 10:06:15.551671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.507 [2024-04-24 10:06:15.551686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:02.507 #24 NEW cov: 11710 ft: 14319 corp: 14/347b lim: 35 exec/s: 24 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:11:02.507 [2024-04-24 10:06:15.601665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48002c48 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.507 [2024-04-24 10:06:15.601696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.507 [2024-04-24 10:06:15.601731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48080048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.507 [2024-04-24 10:06:15.601747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.507 [2024-04-24 10:06:15.601777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.507 [2024-04-24 10:06:15.601794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.507 [2024-04-24 10:06:15.601824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.507 [2024-04-24 10:06:15.601841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:02.507 [2024-04-24 10:06:15.601871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.507 [2024-04-24 10:06:15.601886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:02.507 #25 NEW cov: 
11710 ft: 14407 corp: 15/382b lim: 35 exec/s: 25 rss: 69Mb L: 35/35 MS: 1 ChangeByte- 00:11:02.508 [2024-04-24 10:06:15.651478] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:02.508 [2024-04-24 10:06:15.651552] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:02.508 [2024-04-24 10:06:15.651611] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:02.508 [2024-04-24 10:06:15.651715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.651740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.508 [2024-04-24 10:06:15.651771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.651788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.508 [2024-04-24 10:06:15.651817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.651833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.508 #27 NEW cov: 11719 ft: 14437 corp: 16/406b lim: 35 exec/s: 27 rss: 69Mb L: 24/35 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:11:02.508 [2024-04-24 10:06:15.701935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.701966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.508 [2024-04-24 10:06:15.702001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b7fb00b7 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.702018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.508 [2024-04-24 10:06:15.702048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.702071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.508 [2024-04-24 10:06:15.702102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.702118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:02.508 [2024-04-24 10:06:15.702148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.702164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:02.508 #28 NEW cov: 11719 ft: 14475 corp: 17/441b lim: 35 exec/s: 28 rss: 69Mb L: 35/35 MS: 1 ChangeByte- 00:11:02.508 [2024-04-24 10:06:15.751758] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:02.508 [2024-04-24 10:06:15.751832] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:02.508 [2024-04-24 10:06:15.751890] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:02.508 [2024-04-24 10:06:15.751994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.752016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.508 [2024-04-24 10:06:15.752048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.752073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.508 [2024-04-24 10:06:15.752102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.508 [2024-04-24 10:06:15.752122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.767 #29 NEW cov: 11719 ft: 14525 corp: 18/465b lim: 35 exec/s: 29 rss: 69Mb L: 24/35 MS: 1 CopyPart- 00:11:02.767 [2024-04-24 10:06:15.811945] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:02.767 [2024-04-24 10:06:15.812022] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:02.767 [2024-04-24 10:06:15.812091] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:02.767 [2024-04-24 10:06:15.812202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.812225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.812258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.812276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.812306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.812323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.767 #30 NEW cov: 11719 ft: 14583 corp: 19/489b lim: 35 exec/s: 30 rss: 69Mb L: 24/35 MS: 1 ChangeBit- 00:11:02.767 [2024-04-24 10:06:15.872368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:4848002c cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.872397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.872430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48080048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.872446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.872474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.872490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.872518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.872533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.872561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.872576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:02.767 #31 NEW cov: 11719 ft: 14595 corp: 20/524b lim: 35 exec/s: 31 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:11:02.767 [2024-04-24 10:06:15.932529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.932558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.932592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b7fb00b7 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.932611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.932639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.932655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.932684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.932700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.932727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:62480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.932742] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:02.767 #32 NEW cov: 11719 ft: 14630 corp: 21/559b lim: 35 exec/s: 32 rss: 69Mb L: 35/35 MS: 1 ChangeByte- 00:11:02.767 [2024-04-24 10:06:15.992683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.992713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.992746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:487b0048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.992761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.992790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.992806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.992834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004821 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.992851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:02.767 [2024-04-24 10:06:15.992878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.767 [2024-04-24 10:06:15.992893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:02.767 #33 NEW cov: 11719 ft: 14643 corp: 22/594b lim: 35 exec/s: 33 rss: 69Mb L: 35/35 MS: 1 ChangeByte- 00:11:03.026 [2024-04-24 10:06:16.052846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.052876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.052909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b7fb00b7 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.052924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.052952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.052968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.053000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.053016] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.053044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.053066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:03.026 #34 NEW cov: 11719 ft: 14687 corp: 23/629b lim: 35 exec/s: 34 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:11:03.026 [2024-04-24 10:06:16.102860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.102893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.102925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.102941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.102969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:4848004a cdw11:48002148 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.102985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:03.026 #35 NEW cov: 11719 ft: 14706 corp: 24/656b lim: 35 exec/s: 35 rss: 69Mb L: 27/35 MS: 1 ChangeBit- 00:11:03.026 [2024-04-24 10:06:16.162859] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:03.026 [2024-04-24 10:06:16.162934] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:03.026 [2024-04-24 10:06:16.162992] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:03.026 [2024-04-24 10:06:16.163111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.163133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.163164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.163181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.163209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000f600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.163226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:03.026 #36 NEW cov: 11719 ft: 14724 corp: 25/680b lim: 35 exec/s: 36 rss: 69Mb L: 24/35 MS: 1 ChangeBinInt- 00:11:03.026 [2024-04-24 10:06:16.213230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.213260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.213293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b74800b7 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.213309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.213343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.213359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:03.026 [2024-04-24 10:06:16.213387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.213404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:03.026 #37 NEW cov: 11719 ft: 14750 corp: 26/712b lim: 35 exec/s: 37 rss: 69Mb L: 32/35 MS: 1 ShuffleBytes- 00:11:03.026 [2024-04-24 10:06:16.273254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c9c900c9 cdw11:c900c9c9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.026 [2024-04-24 10:06:16.273286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:03.284 #38 NEW cov: 11719 ft: 14759 corp: 27/719b lim: 35 exec/s: 38 rss: 69Mb L: 7/35 MS: 1 ChangeByte- 00:11:03.284 [2024-04-24 10:06:16.323290] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:03.284 [2024-04-24 10:06:16.323364] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:03.284 [2024-04-24 10:06:16.323422] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:03.284 [2024-04-24 10:06:16.323526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.323547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:03.284 [2024-04-24 10:06:16.323578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.323596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:03.284 [2024-04-24 10:06:16.323624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.323640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:03.284 #39 NEW cov: 11719 ft: 14783 corp: 28/741b lim: 
35 exec/s: 39 rss: 70Mb L: 22/35 MS: 1 EraseBytes- 00:11:03.284 [2024-04-24 10:06:16.383718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.383748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:03.284 [2024-04-24 10:06:16.383780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48080048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.383796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:03.284 [2024-04-24 10:06:16.383824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:c1004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.383839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:03.284 [2024-04-24 10:06:16.383867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.383882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:03.284 [2024-04-24 10:06:16.383913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.383928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:03.284 #40 NEW cov: 11719 ft: 14795 corp: 29/776b lim: 35 exec/s: 40 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:11:03.284 [2024-04-24 10:06:16.443716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4848000a cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.443748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:03.284 #41 NEW cov: 11726 ft: 14815 corp: 30/788b lim: 35 exec/s: 41 rss: 70Mb L: 12/35 MS: 1 CrossOver- 00:11:03.284 [2024-04-24 10:06:16.493912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.493942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:03.284 [2024-04-24 10:06:16.493974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48480048 cdw11:48004821 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.493990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:03.284 [2024-04-24 10:06:16.494019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.494034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:11:03.284 #42 NEW cov: 11726 ft: 14819 corp: 31/809b lim: 35 exec/s: 42 rss: 70Mb L: 21/35 MS: 1 EraseBytes- 00:11:03.284 [2024-04-24 10:06:16.544051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.284 [2024-04-24 10:06:16.544086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:03.285 [2024-04-24 10:06:16.544120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:48480048 cdw11:48004848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.285 [2024-04-24 10:06:16.544135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:03.285 [2024-04-24 10:06:16.544163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:4848004a cdw11:48002148 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:03.285 [2024-04-24 10:06:16.544179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:03.543 #43 NEW cov: 11726 ft: 14867 corp: 32/836b lim: 35 exec/s: 21 rss: 70Mb L: 27/35 MS: 1 ChangeBinInt- 00:11:03.543 #43 DONE cov: 11726 ft: 14867 corp: 32/836b lim: 35 exec/s: 21 rss: 70Mb 00:11:03.543 Done 43 runs in 2 second(s) 00:11:03.543 10:06:16 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_2.conf 00:11:03.543 10:06:16 -- ../common.sh@72 -- # (( i++ )) 00:11:03.543 10:06:16 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:03.543 10:06:16 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:11:03.543 10:06:16 -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:11:03.543 10:06:16 -- nvmf/run.sh@24 -- # local timen=1 00:11:03.543 10:06:16 -- nvmf/run.sh@25 -- # local core=0x1 00:11:03.543 10:06:16 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:11:03.543 10:06:16 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:11:03.543 10:06:16 -- nvmf/run.sh@29 -- # printf %02d 3 00:11:03.543 10:06:16 -- nvmf/run.sh@29 -- # port=4403 00:11:03.543 10:06:16 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:11:03.543 10:06:16 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:11:03.543 10:06:16 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:03.543 10:06:16 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 -r /var/tmp/spdk3.sock 00:11:03.543 [2024-04-24 10:06:16.774694] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:11:03.543 [2024-04-24 10:06:16.774789] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168873 ] 00:11:03.543 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.802 [2024-04-24 10:06:17.057011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.061 [2024-04-24 10:06:17.150385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:04.061 [2024-04-24 10:06:17.150518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.061 [2024-04-24 10:06:17.209100] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.061 [2024-04-24 10:06:17.225316] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:11:04.061 INFO: Running with entropic power schedule (0xFF, 100). 00:11:04.061 INFO: Seed: 296440364 00:11:04.061 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:04.061 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:04.061 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:11:04.061 INFO: A corpus is not provided, starting from an empty corpus 00:11:04.061 #2 INITED exec/s: 0 rss: 60Mb 00:11:04.061 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:04.061 This may also happen if the target rejected all inputs we tried so far 00:11:04.319 NEW_FUNC[1/650]: 0x484be0 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:11:04.319 NEW_FUNC[2/650]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:04.319 #5 NEW cov: 11378 ft: 11379 corp: 2/5b lim: 20 exec/s: 0 rss: 68Mb L: 4/4 MS: 3 InsertByte-ShuffleBytes-CopyPart- 00:11:04.577 NEW_FUNC[1/2]: 0x175bcc0 in nvme_tcp_qpair /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_tcp.c:171 00:11:04.577 NEW_FUNC[2/2]: 0x19994a0 in spdk_sock_recv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:458 00:11:04.577 #8 NEW cov: 11500 ft: 11981 corp: 3/9b lim: 20 exec/s: 0 rss: 68Mb L: 4/4 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:11:04.577 NEW_FUNC[1/4]: 0x113a320 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3224 00:11:04.577 NEW_FUNC[2/4]: 0x113aea0 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3166 00:11:04.577 #13 NEW cov: 11589 ft: 12452 corp: 4/15b lim: 20 exec/s: 0 rss: 68Mb L: 6/6 MS: 5 EraseBytes-ChangeBit-ShuffleBytes-CrossOver-CopyPart- 00:11:04.577 #14 NEW cov: 11674 ft: 12645 corp: 5/22b lim: 20 exec/s: 0 rss: 68Mb L: 7/7 MS: 1 CrossOver- 00:11:04.577 #15 NEW cov: 11674 ft: 12809 corp: 6/26b lim: 20 exec/s: 0 rss: 68Mb L: 4/7 MS: 1 ShuffleBytes- 00:11:04.577 #16 NEW cov: 11691 ft: 13268 corp: 7/39b lim: 20 exec/s: 0 rss: 68Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:11:04.577 #17 NEW cov: 11708 ft: 13510 corp: 8/56b lim: 20 exec/s: 0 rss: 68Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:11:04.835 #23 NEW cov: 11708 ft: 13537 corp: 9/60b lim: 20 exec/s: 0 rss: 69Mb L: 4/17 MS: 1 ShuffleBytes- 00:11:04.835 #24 NEW cov: 11708 ft: 13589 corp: 10/77b lim: 20 exec/s: 0 rss: 
69Mb L: 17/17 MS: 1 CopyPart- 00:11:04.835 #25 NEW cov: 11708 ft: 13669 corp: 11/81b lim: 20 exec/s: 0 rss: 69Mb L: 4/17 MS: 1 ChangeByte- 00:11:04.835 #26 NEW cov: 11708 ft: 13698 corp: 12/95b lim: 20 exec/s: 0 rss: 69Mb L: 14/17 MS: 1 InsertByte- 00:11:04.835 #27 NEW cov: 11708 ft: 13753 corp: 13/101b lim: 20 exec/s: 0 rss: 69Mb L: 6/17 MS: 1 ChangeByte- 00:11:04.835 #30 NEW cov: 11708 ft: 13843 corp: 14/107b lim: 20 exec/s: 0 rss: 69Mb L: 6/17 MS: 3 CopyPart-InsertByte-CrossOver- 00:11:04.835 #31 NEW cov: 11708 ft: 13890 corp: 15/114b lim: 20 exec/s: 0 rss: 69Mb L: 7/17 MS: 1 CopyPart- 00:11:05.093 #32 NEW cov: 11708 ft: 13915 corp: 16/120b lim: 20 exec/s: 0 rss: 69Mb L: 6/17 MS: 1 CopyPart- 00:11:05.093 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:05.093 #33 NEW cov: 11731 ft: 13953 corp: 17/133b lim: 20 exec/s: 0 rss: 69Mb L: 13/17 MS: 1 ChangeBinInt- 00:11:05.093 #34 NEW cov: 11731 ft: 14001 corp: 18/150b lim: 20 exec/s: 0 rss: 69Mb L: 17/17 MS: 1 CopyPart- 00:11:05.093 #38 NEW cov: 11731 ft: 14017 corp: 19/154b lim: 20 exec/s: 0 rss: 69Mb L: 4/17 MS: 4 EraseBytes-EraseBytes-InsertByte-CopyPart- 00:11:05.093 #39 NEW cov: 11731 ft: 14108 corp: 20/160b lim: 20 exec/s: 39 rss: 69Mb L: 6/17 MS: 1 ChangeBinInt- 00:11:05.093 #40 NEW cov: 11731 ft: 14142 corp: 21/166b lim: 20 exec/s: 40 rss: 69Mb L: 6/17 MS: 1 ShuffleBytes- 00:11:05.093 #41 NEW cov: 11732 ft: 14428 corp: 22/174b lim: 20 exec/s: 41 rss: 69Mb L: 8/17 MS: 1 CopyPart- 00:11:05.351 #42 NEW cov: 11732 ft: 14430 corp: 23/178b lim: 20 exec/s: 42 rss: 69Mb L: 4/17 MS: 1 ChangeBinInt- 00:11:05.351 #43 NEW cov: 11732 ft: 14473 corp: 24/184b lim: 20 exec/s: 43 rss: 69Mb L: 6/17 MS: 1 CrossOver- 00:11:05.351 #44 NEW cov: 11732 ft: 14510 corp: 25/190b lim: 20 exec/s: 44 rss: 69Mb L: 6/17 MS: 1 CopyPart- 00:11:05.351 #45 NEW cov: 11732 ft: 14537 corp: 26/197b lim: 20 exec/s: 45 rss: 69Mb L: 7/17 MS: 1 CopyPart- 00:11:05.351 #46 NEW cov: 11732 ft: 14547 corp: 27/203b lim: 20 exec/s: 46 rss: 69Mb L: 6/17 MS: 1 ChangeBinInt- 00:11:05.351 #47 NEW cov: 11732 ft: 14554 corp: 28/207b lim: 20 exec/s: 47 rss: 69Mb L: 4/17 MS: 1 ChangeBit- 00:11:05.351 #48 NEW cov: 11732 ft: 14584 corp: 29/213b lim: 20 exec/s: 48 rss: 69Mb L: 6/17 MS: 1 ChangeBit- 00:11:05.610 #49 NEW cov: 11732 ft: 14650 corp: 30/219b lim: 20 exec/s: 49 rss: 69Mb L: 6/17 MS: 1 ChangeBinInt- 00:11:05.610 #50 NEW cov: 11732 ft: 14687 corp: 31/239b lim: 20 exec/s: 50 rss: 69Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:11:05.610 [2024-04-24 10:06:18.724961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.610 [2024-04-24 10:06:18.725006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:05.610 NEW_FUNC[1/16]: 0x13366e0 in _nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3436 00:11:05.610 NEW_FUNC[2/16]: 0x154ce00 in nvme_ctrlr_process_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3090 00:11:05.610 #51 NEW cov: 11974 ft: 14987 corp: 32/255b lim: 20 exec/s: 51 rss: 69Mb L: 16/20 MS: 1 InsertRepeatedBytes- 00:11:05.610 NEW_FUNC[1/2]: 0x10d9c80 in nvmf_ctrlr_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3297 00:11:05.610 NEW_FUNC[2/2]: 0x10da890 in spdk_nvmf_request_get_bdev 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:4736 00:11:05.610 #52 NEW cov: 12005 ft: 15069 corp: 33/261b lim: 20 exec/s: 52 rss: 70Mb L: 6/20 MS: 1 EraseBytes- 00:11:05.610 #53 NEW cov: 12005 ft: 15081 corp: 34/279b lim: 20 exec/s: 53 rss: 70Mb L: 18/20 MS: 1 CrossOver- 00:11:05.610 #54 NEW cov: 12005 ft: 15090 corp: 35/284b lim: 20 exec/s: 54 rss: 70Mb L: 5/20 MS: 1 InsertByte- 00:11:05.868 #55 NEW cov: 12005 ft: 15100 corp: 36/291b lim: 20 exec/s: 55 rss: 70Mb L: 7/20 MS: 1 ChangeBit- 00:11:05.868 #56 NEW cov: 12005 ft: 15104 corp: 37/295b lim: 20 exec/s: 56 rss: 70Mb L: 4/20 MS: 1 CopyPart- 00:11:05.869 #57 NEW cov: 12005 ft: 15109 corp: 38/308b lim: 20 exec/s: 57 rss: 70Mb L: 13/20 MS: 1 ChangeBit- 00:11:05.869 #60 NEW cov: 12005 ft: 15111 corp: 39/313b lim: 20 exec/s: 60 rss: 70Mb L: 5/20 MS: 3 CopyPart-ChangeByte-CrossOver- 00:11:05.869 #61 NEW cov: 12005 ft: 15129 corp: 40/321b lim: 20 exec/s: 61 rss: 70Mb L: 8/20 MS: 1 InsertByte- 00:11:05.869 #62 NEW cov: 12005 ft: 15136 corp: 41/338b lim: 20 exec/s: 62 rss: 70Mb L: 17/20 MS: 1 ChangeByte- 00:11:06.127 #63 NEW cov: 12005 ft: 15176 corp: 42/347b lim: 20 exec/s: 63 rss: 70Mb L: 9/20 MS: 1 InsertByte- 00:11:06.127 #64 NEW cov: 12005 ft: 15194 corp: 43/364b lim: 20 exec/s: 64 rss: 70Mb L: 17/20 MS: 1 ChangeBinInt- 00:11:06.127 #65 NEW cov: 12005 ft: 15207 corp: 44/372b lim: 20 exec/s: 65 rss: 70Mb L: 8/20 MS: 1 InsertByte- 00:11:06.127 #67 NEW cov: 12005 ft: 15219 corp: 45/390b lim: 20 exec/s: 33 rss: 70Mb L: 18/20 MS: 2 CrossOver-InsertRepeatedBytes- 00:11:06.127 #67 DONE cov: 12005 ft: 15219 corp: 45/390b lim: 20 exec/s: 33 rss: 70Mb 00:11:06.127 Done 67 runs in 2 second(s) 00:11:06.386 10:06:19 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_3.conf 00:11:06.386 10:06:19 -- ../common.sh@72 -- # (( i++ )) 00:11:06.386 10:06:19 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:06.386 10:06:19 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:11:06.386 10:06:19 -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:11:06.386 10:06:19 -- nvmf/run.sh@24 -- # local timen=1 00:11:06.386 10:06:19 -- nvmf/run.sh@25 -- # local core=0x1 00:11:06.386 10:06:19 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:11:06.386 10:06:19 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:11:06.386 10:06:19 -- nvmf/run.sh@29 -- # printf %02d 4 00:11:06.386 10:06:19 -- nvmf/run.sh@29 -- # port=4404 00:11:06.386 10:06:19 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:11:06.386 10:06:19 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:11:06.386 10:06:19 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:06.386 10:06:19 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 -r /var/tmp/spdk4.sock 00:11:06.386 [2024-04-24 10:06:19.487718] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:11:06.386 [2024-04-24 10:06:19.487793] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169255 ] 00:11:06.386 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.645 [2024-04-24 10:06:19.804885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.645 [2024-04-24 10:06:19.897720] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:06.645 [2024-04-24 10:06:19.897844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.904 [2024-04-24 10:06:19.956462] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.904 [2024-04-24 10:06:19.972660] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:11:06.904 INFO: Running with entropic power schedule (0xFF, 100). 00:11:06.904 INFO: Seed: 3044717372 00:11:06.904 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:06.904 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:06.904 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:11:06.904 INFO: A corpus is not provided, starting from an empty corpus 00:11:06.904 #2 INITED exec/s: 0 rss: 61Mb 00:11:06.904 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:06.904 This may also happen if the target rejected all inputs we tried so far 00:11:06.904 [2024-04-24 10:06:20.028041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:06.904 [2024-04-24 10:06:20.028080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.162 NEW_FUNC[1/664]: 0x485cd0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:11:07.162 NEW_FUNC[2/664]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:07.162 #10 NEW cov: 11510 ft: 11511 corp: 2/10b lim: 35 exec/s: 0 rss: 67Mb L: 9/9 MS: 3 ShuffleBytes-CrossOver-CMP- DE: "\221\333\361`M+\011\000"- 00:11:07.162 [2024-04-24 10:06:20.370422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.162 [2024-04-24 10:06:20.370470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.162 #11 NEW cov: 11623 ft: 12072 corp: 3/19b lim: 35 exec/s: 0 rss: 67Mb L: 9/9 MS: 1 PersAutoDict- DE: "\221\333\361`M+\011\000"- 00:11:07.162 [2024-04-24 10:06:20.420611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.162 [2024-04-24 10:06:20.420636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.421 #12 NEW cov: 11629 ft: 12423 corp: 4/29b lim: 35 exec/s: 0 rss: 67Mb L: 10/10 MS: 1 InsertByte- 00:11:07.421 [2024-04-24 10:06:20.470817] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:604d2be4 cdw11:2b090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.421 [2024-04-24 10:06:20.470842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.421 #15 NEW cov: 11714 ft: 12635 corp: 5/36b lim: 35 exec/s: 0 rss: 67Mb L: 7/10 MS: 3 EraseBytes-CopyPart-InsertByte- 00:11:07.421 [2024-04-24 10:06:20.521035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.421 [2024-04-24 10:06:20.521064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.421 #16 NEW cov: 11714 ft: 12732 corp: 6/45b lim: 35 exec/s: 0 rss: 68Mb L: 9/10 MS: 1 CopyPart- 00:11:07.421 [2024-04-24 10:06:20.581270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.421 [2024-04-24 10:06:20.581294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.421 #17 NEW cov: 11714 ft: 12827 corp: 7/54b lim: 35 exec/s: 0 rss: 68Mb L: 9/10 MS: 1 ChangeBinInt- 00:11:07.421 [2024-04-24 10:06:20.642709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.421 [2024-04-24 10:06:20.642736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.421 [2024-04-24 10:06:20.642820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:a7a709a7 cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.421 [2024-04-24 10:06:20.642836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:07.421 [2024-04-24 10:06:20.642925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:a7a7a7a7 cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.421 [2024-04-24 10:06:20.642941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:07.421 [2024-04-24 10:06:20.643024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a7a7a7a7 cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.421 [2024-04-24 10:06:20.643039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:07.421 #18 NEW cov: 11714 ft: 13796 corp: 8/86b lim: 35 exec/s: 0 rss: 68Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:11:07.680 [2024-04-24 10:06:20.701775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9391020a cdw11:dbf10002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.680 [2024-04-24 10:06:20.701802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.680 #22 NEW cov: 11714 ft: 13871 corp: 9/93b lim: 35 exec/s: 0 rss: 68Mb L: 7/32 MS: 4 CopyPart-ChangeBit-CrossOver-InsertByte- 00:11:07.680 [2024-04-24 
10:06:20.752056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:93a0020a cdw11:91db0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.680 [2024-04-24 10:06:20.752094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.680 #23 NEW cov: 11714 ft: 13894 corp: 10/101b lim: 35 exec/s: 0 rss: 68Mb L: 8/32 MS: 1 InsertByte- 00:11:07.680 [2024-04-24 10:06:20.812340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:606d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.680 [2024-04-24 10:06:20.812367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.680 #24 NEW cov: 11714 ft: 13988 corp: 11/111b lim: 35 exec/s: 0 rss: 68Mb L: 10/32 MS: 1 ChangeBit- 00:11:07.680 [2024-04-24 10:06:20.862553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:604d36e4 cdw11:2b090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.680 [2024-04-24 10:06:20.862578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.680 #25 NEW cov: 11714 ft: 14006 corp: 12/118b lim: 35 exec/s: 0 rss: 68Mb L: 7/32 MS: 1 ChangeByte- 00:11:07.680 [2024-04-24 10:06:20.912659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0adb0a91 cdw11:f1600002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.680 [2024-04-24 10:06:20.912684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.680 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:07.680 #26 NEW cov: 11737 ft: 14056 corp: 13/128b lim: 35 exec/s: 0 rss: 68Mb L: 10/32 MS: 1 CrossOver- 00:11:07.938 [2024-04-24 10:06:20.962926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fffffdff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.938 [2024-04-24 10:06:20.962951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.938 #27 NEW cov: 11737 ft: 14061 corp: 14/137b lim: 35 exec/s: 0 rss: 68Mb L: 9/32 MS: 1 ChangeBinInt- 00:11:07.938 [2024-04-24 10:06:21.013391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f1600adb cdw11:4d2b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.938 [2024-04-24 10:06:21.013416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.938 [2024-04-24 10:06:21.013495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f16091db cdw11:4d2b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.938 [2024-04-24 10:06:21.013509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:07.938 #28 NEW cov: 11737 ft: 14299 corp: 15/152b lim: 35 exec/s: 28 rss: 68Mb L: 15/32 MS: 1 CopyPart- 00:11:07.938 [2024-04-24 10:06:21.063305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:604d0af1 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:11:07.938 [2024-04-24 10:06:21.063329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.938 #29 NEW cov: 11737 ft: 14331 corp: 16/162b lim: 35 exec/s: 29 rss: 68Mb L: 10/32 MS: 1 CopyPart- 00:11:07.938 [2024-04-24 10:06:21.114639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.938 [2024-04-24 10:06:21.114664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.938 [2024-04-24 10:06:21.114743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:a7a7095d cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.938 [2024-04-24 10:06:21.114758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:07.938 [2024-04-24 10:06:21.114846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:a7a7a7a7 cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.938 [2024-04-24 10:06:21.114861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:07.938 [2024-04-24 10:06:21.114944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a7a7a7a7 cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.938 [2024-04-24 10:06:21.114958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:07.938 #30 NEW cov: 11737 ft: 14343 corp: 17/194b lim: 35 exec/s: 30 rss: 68Mb L: 32/32 MS: 1 ChangeByte- 00:11:07.938 [2024-04-24 10:06:21.173752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f10a0a91 cdw11:db600002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:07.938 [2024-04-24 10:06:21.173777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:07.938 #31 NEW cov: 11737 ft: 14353 corp: 18/204b lim: 35 exec/s: 31 rss: 68Mb L: 10/32 MS: 1 ShuffleBytes- 00:11:08.197 [2024-04-24 10:06:21.224303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.224330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.197 [2024-04-24 10:06:21.224426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:91db0900 cdw11:f1600002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.224441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:08.197 #37 NEW cov: 11737 ft: 14382 corp: 19/220b lim: 35 exec/s: 37 rss: 68Mb L: 16/32 MS: 1 CopyPart- 00:11:08.197 [2024-04-24 10:06:21.275240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.275265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.197 [2024-04-24 10:06:21.275347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.275362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:08.197 [2024-04-24 10:06:21.275449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.275462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:08.197 [2024-04-24 10:06:21.275547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.275561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:08.197 #38 NEW cov: 11737 ft: 14389 corp: 20/252b lim: 35 exec/s: 38 rss: 68Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:11:08.197 [2024-04-24 10:06:21.325055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.325094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.197 [2024-04-24 10:06:21.325198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:a7a709a7 cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.325213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:08.197 [2024-04-24 10:06:21.325301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:a7a7a7a7 cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.325315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:08.197 #39 NEW cov: 11737 ft: 14588 corp: 21/278b lim: 35 exec/s: 39 rss: 68Mb L: 26/32 MS: 1 EraseBytes- 00:11:08.197 [2024-04-24 10:06:21.374509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:9d920003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.374533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.197 #40 NEW cov: 11737 ft: 14603 corp: 22/288b lim: 35 exec/s: 40 rss: 69Mb L: 10/32 MS: 1 ChangeBinInt- 00:11:08.197 [2024-04-24 10:06:21.424723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4d2b0a60 cdw11:6a090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.197 [2024-04-24 10:06:21.424746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.197 #41 NEW cov: 11737 ft: 14628 corp: 23/295b lim: 35 exec/s: 41 rss: 69Mb L: 7/32 MS: 1 EraseBytes- 00:11:08.456 [2024-04-24 10:06:21.484842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f16091db cdw11:4d2b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.456 [2024-04-24 10:06:21.484867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.456 #42 NEW cov: 11737 ft: 14693 corp: 24/303b lim: 35 exec/s: 42 rss: 69Mb L: 8/32 MS: 1 PersAutoDict- DE: "\221\333\361`M+\011\000"- 00:11:08.456 [2024-04-24 10:06:21.535445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0adb0a91 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.456 [2024-04-24 10:06:21.535469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.456 [2024-04-24 10:06:21.535556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00f10002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.456 [2024-04-24 10:06:21.535571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:08.456 #43 NEW cov: 11737 ft: 14708 corp: 25/321b lim: 35 exec/s: 43 rss: 69Mb L: 18/32 MS: 1 InsertRepeatedBytes- 00:11:08.456 [2024-04-24 10:06:21.585324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:60e42be4 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.456 [2024-04-24 10:06:21.585348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.456 #44 NEW cov: 11737 ft: 14723 corp: 26/328b lim: 35 exec/s: 44 rss: 69Mb L: 7/32 MS: 1 CopyPart- 00:11:08.456 [2024-04-24 10:06:21.635510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.456 [2024-04-24 10:06:21.635534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.456 #45 NEW cov: 11737 ft: 14769 corp: 27/337b lim: 35 exec/s: 45 rss: 69Mb L: 9/32 MS: 1 ShuffleBytes- 00:11:08.456 [2024-04-24 10:06:21.686730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0adb0a91 cdw11:96960001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.456 [2024-04-24 10:06:21.686754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.456 [2024-04-24 10:06:21.686835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:96969696 cdw11:96960001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.456 [2024-04-24 10:06:21.686848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:08.456 [2024-04-24 10:06:21.686950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:96969696 cdw11:96960001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.456 [2024-04-24 10:06:21.686965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:08.456 [2024-04-24 10:06:21.687042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:604d96f1 cdw11:2b090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:11:08.456 [2024-04-24 10:06:21.687055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:08.456 #46 NEW cov: 11737 ft: 14812 corp: 28/365b lim: 35 exec/s: 46 rss: 69Mb L: 28/32 MS: 1 InsertRepeatedBytes- 00:11:08.715 [2024-04-24 10:06:21.737023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:25250a91 cdw11:25250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.737047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.715 [2024-04-24 10:06:21.737136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.737151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:08.715 [2024-04-24 10:06:21.737230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.737244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:08.715 [2024-04-24 10:06:21.737331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:25252525 cdw11:f10a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.737344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:08.715 #47 NEW cov: 11737 ft: 14823 corp: 29/398b lim: 35 exec/s: 47 rss: 69Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:11:08.715 [2024-04-24 10:06:21.796299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:604d2be4 cdw11:2b090002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.796323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.715 #48 NEW cov: 11737 ft: 14862 corp: 30/406b lim: 35 exec/s: 48 rss: 69Mb L: 8/33 MS: 1 InsertByte- 00:11:08.715 [2024-04-24 10:06:21.846546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbc80a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.846572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.715 #49 NEW cov: 11737 ft: 14884 corp: 31/416b lim: 35 exec/s: 49 rss: 69Mb L: 10/33 MS: 1 ChangeByte- 00:11:08.715 [2024-04-24 10:06:21.897845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.897877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.715 [2024-04-24 10:06:21.897960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:a7a709a7 cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.897976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:08.715 [2024-04-24 10:06:21.898074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:a7a7a7a5 cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.898090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:08.715 [2024-04-24 10:06:21.898173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a7a7a7a7 cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.898187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:08.715 #50 NEW cov: 11737 ft: 14899 corp: 32/448b lim: 35 exec/s: 50 rss: 69Mb L: 32/33 MS: 1 ChangeBit- 00:11:08.715 [2024-04-24 10:06:21.947278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:60640af1 cdw11:64640002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.947303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.715 [2024-04-24 10:06:21.947383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:64646464 cdw11:644d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.715 [2024-04-24 10:06:21.947398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:08.715 #51 NEW cov: 11737 ft: 14922 corp: 33/467b lim: 35 exec/s: 51 rss: 69Mb L: 19/33 MS: 1 InsertRepeatedBytes- 00:11:08.974 [2024-04-24 10:06:21.998020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dbf10a91 cdw11:604d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.974 [2024-04-24 10:06:21.998046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:08.974 [2024-04-24 10:06:21.998138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:a79109a7 cdw11:dbf10002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.974 [2024-04-24 10:06:21.998153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:08.974 [2024-04-24 10:06:21.998228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:09a74d2b cdw11:a7a70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:08.974 [2024-04-24 10:06:21.998242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:08.974 #52 NEW cov: 11737 ft: 14947 corp: 34/493b lim: 35 exec/s: 26 rss: 69Mb L: 26/33 MS: 1 CrossOver- 00:11:08.974 #52 DONE cov: 11737 ft: 14947 corp: 34/493b lim: 35 exec/s: 26 rss: 69Mb 00:11:08.974 ###### Recommended dictionary. ###### 00:11:08.974 "\221\333\361`M+\011\000" # Uses: 2 00:11:08.974 ###### End of recommended dictionary. 
###### 00:11:08.974 Done 52 runs in 2 second(s) 00:11:08.974 10:06:22 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_4.conf 00:11:08.974 10:06:22 -- ../common.sh@72 -- # (( i++ )) 00:11:08.974 10:06:22 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:08.974 10:06:22 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:11:08.974 10:06:22 -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:11:08.974 10:06:22 -- nvmf/run.sh@24 -- # local timen=1 00:11:08.974 10:06:22 -- nvmf/run.sh@25 -- # local core=0x1 00:11:08.974 10:06:22 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:11:08.974 10:06:22 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:11:08.974 10:06:22 -- nvmf/run.sh@29 -- # printf %02d 5 00:11:08.974 10:06:22 -- nvmf/run.sh@29 -- # port=4405 00:11:08.974 10:06:22 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:11:08.974 10:06:22 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:11:08.974 10:06:22 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:08.974 10:06:22 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 -r /var/tmp/spdk5.sock 00:11:08.974 [2024-04-24 10:06:22.207618] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:08.974 [2024-04-24 10:06:22.207704] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169680 ] 00:11:08.974 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.541 [2024-04-24 10:06:22.520009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.541 [2024-04-24 10:06:22.608029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:09.541 [2024-04-24 10:06:22.608164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.541 [2024-04-24 10:06:22.666904] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.541 [2024-04-24 10:06:22.683100] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:11:09.541 INFO: Running with entropic power schedule (0xFF, 100). 00:11:09.541 INFO: Seed: 1459734199 00:11:09.541 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:09.541 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:09.541 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:11:09.541 INFO: A corpus is not provided, starting from an empty corpus 00:11:09.541 #2 INITED exec/s: 0 rss: 61Mb 00:11:09.541 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:09.541 This may also happen if the target rejected all inputs we tried so far 00:11:09.541 [2024-04-24 10:06:22.728704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:09.541 [2024-04-24 10:06:22.728734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:09.541 [2024-04-24 10:06:22.728785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:09.541 [2024-04-24 10:06:22.728800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:09.541 [2024-04-24 10:06:22.728850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:09.541 [2024-04-24 10:06:22.728863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:09.800 NEW_FUNC[1/664]: 0x487e60 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:11:09.800 NEW_FUNC[2/664]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:09.800 #7 NEW cov: 11521 ft: 11522 corp: 2/34b lim: 45 exec/s: 0 rss: 67Mb L: 33/33 MS: 5 ChangeByte-ChangeByte-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:11:09.800 [2024-04-24 10:06:23.039767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:09.800 [2024-04-24 10:06:23.039812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:09.800 [2024-04-24 10:06:23.039868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:09.800 [2024-04-24 10:06:23.039882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:09.800 [2024-04-24 10:06:23.039935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:09.800 [2024-04-24 10:06:23.039949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:09.800 [2024-04-24 10:06:23.040003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:09.800 [2024-04-24 10:06:23.040016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:09.800 #9 NEW cov: 11634 ft: 12420 corp: 3/76b lim: 45 exec/s: 0 rss: 68Mb L: 42/42 MS: 2 CopyPart-InsertRepeatedBytes- 00:11:10.059 [2024-04-24 10:06:23.079774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 
10:06:23.079805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.079860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.079873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.079930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.079945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.079999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.080012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.059 #10 NEW cov: 11640 ft: 12755 corp: 4/119b lim: 45 exec/s: 0 rss: 68Mb L: 43/43 MS: 1 CrossOver- 00:11:10.059 [2024-04-24 10:06:23.119722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.119748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.119804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.119818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.119874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.119889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.059 #16 NEW cov: 11725 ft: 12985 corp: 5/153b lim: 45 exec/s: 0 rss: 68Mb L: 34/43 MS: 1 InsertByte- 00:11:10.059 [2024-04-24 10:06:23.160189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.160218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.160271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.160284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.160344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 
10:06:23.160358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.160414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00009393 cdw11:00930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.160427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.160483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.160497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:10.059 #17 NEW cov: 11725 ft: 13160 corp: 6/198b lim: 45 exec/s: 0 rss: 68Mb L: 45/45 MS: 1 CrossOver- 00:11:10.059 [2024-04-24 10:06:23.200162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.200190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.200245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.200261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.200307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.200321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.200376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.200390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.059 #18 NEW cov: 11725 ft: 13253 corp: 7/241b lim: 45 exec/s: 0 rss: 68Mb L: 43/45 MS: 1 ShuffleBytes- 00:11:10.059 [2024-04-24 10:06:23.239919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.239946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.240003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.240019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.240076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 
10:06:23.240094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.059 #19 NEW cov: 11725 ft: 13333 corp: 8/275b lim: 45 exec/s: 0 rss: 68Mb L: 34/45 MS: 1 InsertByte- 00:11:10.059 [2024-04-24 10:06:23.280044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.280076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.280132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.280146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.059 #20 NEW cov: 11725 ft: 13669 corp: 9/300b lim: 45 exec/s: 0 rss: 68Mb L: 25/45 MS: 1 EraseBytes- 00:11:10.059 [2024-04-24 10:06:23.320495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.320521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.320577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.320590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.320645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.320659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.059 [2024-04-24 10:06:23.320714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.059 [2024-04-24 10:06:23.320728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.318 #21 NEW cov: 11725 ft: 13736 corp: 10/343b lim: 45 exec/s: 0 rss: 68Mb L: 43/45 MS: 1 ShuffleBytes- 00:11:10.318 [2024-04-24 10:06:23.360390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.360417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.360472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.360486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.360541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) 
qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.360556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.318 #22 NEW cov: 11725 ft: 13837 corp: 11/377b lim: 45 exec/s: 0 rss: 68Mb L: 34/45 MS: 1 ChangeBinInt- 00:11:10.318 [2024-04-24 10:06:23.400684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.400711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.400769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.400786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.400841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.400856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.400909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93930093 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.400923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.318 #23 NEW cov: 11725 ft: 13854 corp: 12/418b lim: 45 exec/s: 0 rss: 68Mb L: 41/45 MS: 1 EraseBytes- 00:11:10.318 [2024-04-24 10:06:23.440828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.440854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.440910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.440924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.440977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.440991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.441046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:939b0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.441064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.318 #24 NEW cov: 11725 ft: 13857 corp: 13/461b lim: 45 exec/s: 0 rss: 68Mb L: 43/45 MS: 1 ChangeBinInt- 
00:11:10.318 [2024-04-24 10:06:23.480938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.480964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.481022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.481036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.481091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.481106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.481162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.481175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.318 #25 NEW cov: 11725 ft: 13947 corp: 14/505b lim: 45 exec/s: 0 rss: 69Mb L: 44/45 MS: 1 CopyPart- 00:11:10.318 [2024-04-24 10:06:23.530913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.530942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.530999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.531013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.318 [2024-04-24 10:06:23.531069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.318 [2024-04-24 10:06:23.531083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.318 #26 NEW cov: 11725 ft: 13974 corp: 15/534b lim: 45 exec/s: 0 rss: 69Mb L: 29/45 MS: 1 EraseBytes- 00:11:10.318 [2024-04-24 10:06:23.571035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.319 [2024-04-24 10:06:23.571065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.319 [2024-04-24 10:06:23.571121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.319 [2024-04-24 10:06:23.571136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:11:10.319 [2024-04-24 10:06:23.571190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.319 [2024-04-24 10:06:23.571204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.577 #27 NEW cov: 11725 ft: 14014 corp: 16/568b lim: 45 exec/s: 0 rss: 69Mb L: 34/45 MS: 1 ShuffleBytes- 00:11:10.578 [2024-04-24 10:06:23.611310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00ff0000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.611336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.611390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.611405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.611460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.611474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.611527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.611540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.578 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:10.578 #28 NEW cov: 11748 ft: 14058 corp: 17/612b lim: 45 exec/s: 0 rss: 69Mb L: 44/45 MS: 1 InsertRepeatedBytes- 00:11:10.578 [2024-04-24 10:06:23.661583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.661609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.661668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.661682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.661735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.661750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.661805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 
10:06:23.661819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.661874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:9393930a cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.661888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:10.578 #29 NEW cov: 11748 ft: 14079 corp: 18/657b lim: 45 exec/s: 0 rss: 69Mb L: 45/45 MS: 1 CopyPart- 00:11:10.578 [2024-04-24 10:06:23.701426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.701454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.701512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.701526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.701582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.701596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.578 #30 NEW cov: 11748 ft: 14096 corp: 19/691b lim: 45 exec/s: 30 rss: 69Mb L: 34/45 MS: 1 ChangeBinInt- 00:11:10.578 [2024-04-24 10:06:23.741503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.741530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.741586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.741600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.741654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:9393932d cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.741669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.578 #31 NEW cov: 11748 ft: 14142 corp: 20/720b lim: 45 exec/s: 31 rss: 69Mb L: 29/45 MS: 1 ChangeByte- 00:11:10.578 [2024-04-24 10:06:23.781643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.781668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.781725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) 
qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.781740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.781792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.781805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.578 #32 NEW cov: 11748 ft: 14167 corp: 21/753b lim: 45 exec/s: 32 rss: 69Mb L: 33/45 MS: 1 CrossOver- 00:11:10.578 [2024-04-24 10:06:23.821726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.821751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.821807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.821821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.578 [2024-04-24 10:06:23.821874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.578 [2024-04-24 10:06:23.821887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.578 #33 NEW cov: 11748 ft: 14176 corp: 22/787b lim: 45 exec/s: 33 rss: 69Mb L: 34/45 MS: 1 EraseBytes- 00:11:10.837 [2024-04-24 10:06:23.861895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.861920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.837 [2024-04-24 10:06:23.861976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.861990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.837 [2024-04-24 10:06:23.862044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.862064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.837 #34 NEW cov: 11748 ft: 14183 corp: 23/820b lim: 45 exec/s: 34 rss: 69Mb L: 33/45 MS: 1 CrossOver- 00:11:10.837 [2024-04-24 10:06:23.902157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.902181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:11:10.837 [2024-04-24 10:06:23.902234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.902248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.837 [2024-04-24 10:06:23.902303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.902318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.837 [2024-04-24 10:06:23.902377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.902390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.837 #35 NEW cov: 11748 ft: 14215 corp: 24/863b lim: 45 exec/s: 35 rss: 69Mb L: 43/45 MS: 1 CopyPart- 00:11:10.837 [2024-04-24 10:06:23.941899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.941924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.837 [2024-04-24 10:06:23.941981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.941996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.837 #36 NEW cov: 11748 ft: 14329 corp: 25/889b lim: 45 exec/s: 36 rss: 69Mb L: 26/45 MS: 1 EraseBytes- 00:11:10.837 [2024-04-24 10:06:23.982422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.982448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.837 [2024-04-24 10:06:23.982505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.982518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.837 [2024-04-24 10:06:23.982573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.837 [2024-04-24 10:06:23.982587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.837 [2024-04-24 10:06:23.982640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.838 [2024-04-24 10:06:23.982652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:11:10.838 #37 NEW cov: 11748 ft: 14335 corp: 26/933b lim: 45 exec/s: 37 rss: 69Mb L: 44/45 MS: 1 ChangeBit- 00:11:10.838 [2024-04-24 10:06:24.032099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.838 [2024-04-24 10:06:24.032124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.838 #38 NEW cov: 11748 ft: 15057 corp: 27/950b lim: 45 exec/s: 38 rss: 69Mb L: 17/45 MS: 1 EraseBytes- 00:11:10.838 [2024-04-24 10:06:24.072669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5d930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.838 [2024-04-24 10:06:24.072695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:10.838 [2024-04-24 10:06:24.072752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.838 [2024-04-24 10:06:24.072767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:10.838 [2024-04-24 10:06:24.072823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.838 [2024-04-24 10:06:24.072836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:10.838 [2024-04-24 10:06:24.072891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93930000 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:10.838 [2024-04-24 10:06:24.072906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:10.838 #39 NEW cov: 11748 ft: 15069 corp: 28/992b lim: 45 exec/s: 39 rss: 69Mb L: 42/45 MS: 1 InsertByte- 00:11:11.095 [2024-04-24 10:06:24.122873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.095 [2024-04-24 10:06:24.122899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.095 [2024-04-24 10:06:24.122955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.122968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.123025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.123039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.123096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.123109] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:11.096 #40 NEW cov: 11748 ft: 15166 corp: 29/1035b lim: 45 exec/s: 40 rss: 69Mb L: 43/45 MS: 1 ShuffleBytes- 00:11:11.096 [2024-04-24 10:06:24.162952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.162977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.163035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.163049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.163110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.163125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.163178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.163191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:11.096 #41 NEW cov: 11748 ft: 15170 corp: 30/1078b lim: 45 exec/s: 41 rss: 69Mb L: 43/45 MS: 1 ChangeBit- 00:11:11.096 [2024-04-24 10:06:24.203105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:5d930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.203130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.203188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.203202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.203257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.203272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.203328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93930000 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.203341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:11.096 #42 NEW cov: 11748 ft: 15183 corp: 31/1122b lim: 45 exec/s: 42 rss: 69Mb L: 44/45 MS: 1 CrossOver- 00:11:11.096 [2024-04-24 10:06:24.242693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.242718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.096 #48 NEW cov: 11748 ft: 15193 corp: 32/1139b lim: 45 exec/s: 48 rss: 70Mb L: 17/45 MS: 1 ChangeByte- 00:11:11.096 [2024-04-24 10:06:24.293190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.293216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.293271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.293286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.293343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.293356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.096 #49 NEW cov: 11748 ft: 15207 corp: 33/1168b lim: 45 exec/s: 49 rss: 70Mb L: 29/45 MS: 1 EraseBytes- 00:11:11.096 [2024-04-24 10:06:24.333449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.333474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.333531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.333545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.333602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.333616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.096 [2024-04-24 10:06:24.333672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.096 [2024-04-24 10:06:24.333685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:11.096 #50 NEW cov: 11748 ft: 15214 corp: 34/1205b lim: 45 exec/s: 50 rss: 70Mb L: 37/45 MS: 1 EraseBytes- 00:11:11.355 [2024-04-24 10:06:24.373421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.373451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.355 
[2024-04-24 10:06:24.373510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.373525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.373579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.373594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.355 #51 NEW cov: 11748 ft: 15247 corp: 35/1239b lim: 45 exec/s: 51 rss: 70Mb L: 34/45 MS: 1 CopyPart- 00:11:11.355 [2024-04-24 10:06:24.413340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.413365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.413423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.413438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.355 #52 NEW cov: 11748 ft: 15263 corp: 36/1264b lim: 45 exec/s: 52 rss: 70Mb L: 25/45 MS: 1 EraseBytes- 00:11:11.355 [2024-04-24 10:06:24.453640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.453665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.453722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.453736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.453793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.453808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.355 #53 NEW cov: 11748 ft: 15273 corp: 37/1297b lim: 45 exec/s: 53 rss: 70Mb L: 33/45 MS: 1 ShuffleBytes- 00:11:11.355 [2024-04-24 10:06:24.493729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93930a93 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.493755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.493814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.493828] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.493883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:9393932d cdw11:21930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.493898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.355 #54 NEW cov: 11748 ft: 15282 corp: 38/1326b lim: 45 exec/s: 54 rss: 70Mb L: 29/45 MS: 1 ChangeByte- 00:11:11.355 [2024-04-24 10:06:24.534046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.534074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.534132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:93939393 cdw11:93930004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.534147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.534204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:93939393 cdw11:93000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.534218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.534275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:4fd6092b cdw11:c3710001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.534288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:11.355 #55 NEW cov: 11748 ft: 15302 corp: 39/1367b lim: 45 exec/s: 55 rss: 70Mb L: 41/45 MS: 1 CMP- DE: "\000\011+O\326\303q2"- 00:11:11.355 [2024-04-24 10:06:24.574151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.574177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.574233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.574248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.574302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.574316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.574368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:34000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 
10:06:24.574381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.614218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.614262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.355 [2024-04-24 10:06:24.614327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.355 [2024-04-24 10:06:24.614347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.356 [2024-04-24 10:06:24.614410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00001000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.356 [2024-04-24 10:06:24.614429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.356 [2024-04-24 10:06:24.614493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:34000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.356 [2024-04-24 10:06:24.614515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:11.615 #57 NEW cov: 11748 ft: 15311 corp: 40/1403b lim: 45 exec/s: 57 rss: 70Mb L: 36/45 MS: 2 CopyPart-ChangeBit- 00:11:11.615 [2024-04-24 10:06:24.654394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.615 [2024-04-24 10:06:24.654421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.615 [2024-04-24 10:06:24.654477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.615 [2024-04-24 10:06:24.654491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.615 [2024-04-24 10:06:24.654544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:33330000 cdw11:33330001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.615 [2024-04-24 10:06:24.654558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.615 [2024-04-24 10:06:24.654612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:34010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.615 [2024-04-24 10:06:24.654625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:11.615 #58 NEW cov: 11748 ft: 15333 corp: 41/1442b lim: 45 exec/s: 58 rss: 70Mb L: 39/45 MS: 1 InsertRepeatedBytes- 00:11:11.615 [2024-04-24 10:06:24.694655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00ff0000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:11:11.615 [2024-04-24 10:06:24.694681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:11.615 [2024-04-24 10:06:24.694739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.615 [2024-04-24 10:06:24.694754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:11.615 [2024-04-24 10:06:24.694808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.615 [2024-04-24 10:06:24.694823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:11.615 [2024-04-24 10:06:24.694876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.615 [2024-04-24 10:06:24.694889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:11.616 [2024-04-24 10:06:24.694942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:01000034 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:11.616 [2024-04-24 10:06:24.694957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:11.616 #59 NEW cov: 11748 ft: 15344 corp: 42/1487b lim: 45 exec/s: 29 rss: 70Mb L: 45/45 MS: 1 InsertByte- 00:11:11.616 #59 DONE cov: 11748 ft: 15344 corp: 42/1487b lim: 45 exec/s: 29 rss: 70Mb 00:11:11.616 ###### Recommended dictionary. ###### 00:11:11.616 "\000\011+O\326\303q2" # Uses: 0 00:11:11.616 ###### End of recommended dictionary. 
###### 00:11:11.616 Done 59 runs in 2 second(s) 00:11:11.616 10:06:24 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_5.conf 00:11:11.616 10:06:24 -- ../common.sh@72 -- # (( i++ )) 00:11:11.616 10:06:24 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:11.616 10:06:24 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:11:11.616 10:06:24 -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:11:11.616 10:06:24 -- nvmf/run.sh@24 -- # local timen=1 00:11:11.616 10:06:24 -- nvmf/run.sh@25 -- # local core=0x1 00:11:11.616 10:06:24 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:11:11.616 10:06:24 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:11:11.616 10:06:24 -- nvmf/run.sh@29 -- # printf %02d 6 00:11:11.616 10:06:24 -- nvmf/run.sh@29 -- # port=4406 00:11:11.616 10:06:24 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:11:11.616 10:06:24 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:11:11.616 10:06:24 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:11.616 10:06:24 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 -r /var/tmp/spdk6.sock 00:11:11.875 [2024-04-24 10:06:24.905493] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:11.875 [2024-04-24 10:06:24.905568] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170039 ] 00:11:11.875 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.134 [2024-04-24 10:06:25.216247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.134 [2024-04-24 10:06:25.304824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:12.134 [2024-04-24 10:06:25.304956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.134 [2024-04-24 10:06:25.363623] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.134 [2024-04-24 10:06:25.379825] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:11:12.134 INFO: Running with entropic power schedule (0xFF, 100). 00:11:12.134 INFO: Seed: 4156750860 00:11:12.393 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:12.393 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:12.393 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:11:12.393 INFO: A corpus is not provided, starting from an empty corpus 00:11:12.393 #2 INITED exec/s: 0 rss: 60Mb 00:11:12.393 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:12.393 This may also happen if the target rejected all inputs we tried so far 00:11:12.393 [2024-04-24 10:06:25.435570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.393 [2024-04-24 10:06:25.435600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.393 [2024-04-24 10:06:25.435656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.393 [2024-04-24 10:06:25.435670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.393 [2024-04-24 10:06:25.435722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.393 [2024-04-24 10:06:25.435736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.393 [2024-04-24 10:06:25.435788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.393 [2024-04-24 10:06:25.435802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:12.652 NEW_FUNC[1/659]: 0x48a670 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:11:12.652 NEW_FUNC[2/659]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:12.652 #6 NEW cov: 11414 ft: 11415 corp: 2/10b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 4 ChangeBit-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:11:12.652 [2024-04-24 10:06:25.746236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.746282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.746339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.746356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.746411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.746428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.746484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.746502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:12.652 NEW_FUNC[1/3]: 0x1533be0 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3789 00:11:12.652 NEW_FUNC[2/3]: 0x1701fc0 in spdk_nvme_probe_poll_async /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1507 00:11:12.652 #7 NEW cov: 
11551 ft: 11882 corp: 3/19b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 ChangeBit- 00:11:12.652 [2024-04-24 10:06:25.796222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.796250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.796300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.796313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.796361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.796375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.796422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.796435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:12.652 #8 NEW cov: 11557 ft: 12107 corp: 4/28b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 ShuffleBytes- 00:11:12.652 [2024-04-24 10:06:25.836324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.836349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.836399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004445 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.836414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.836464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.836478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.836528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.836541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:12.652 #9 NEW cov: 11642 ft: 12400 corp: 5/37b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 ChangeBit- 00:11:12.652 [2024-04-24 10:06:25.876336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.876361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.876412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.876426] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.876473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000044f9 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.876487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.652 #11 NEW cov: 11642 ft: 12730 corp: 6/43b lim: 10 exec/s: 0 rss: 68Mb L: 6/9 MS: 2 ChangeByte-CrossOver- 00:11:12.652 [2024-04-24 10:06:25.916439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004404 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.916466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.916515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.916528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.652 [2024-04-24 10:06:25.916578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000044f9 cdw11:00000000 00:11:12.652 [2024-04-24 10:06:25.916591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.912 #12 NEW cov: 11642 ft: 12824 corp: 7/49b lim: 10 exec/s: 0 rss: 68Mb L: 6/9 MS: 1 ChangeBit- 00:11:12.912 [2024-04-24 10:06:25.956689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:25.956716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:25.956767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:25.956781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:25.956830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:25.956843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:25.956890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004464 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:25.956903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:12.912 #13 NEW cov: 11642 ft: 12962 corp: 8/58b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 ChangeBit- 00:11:12.912 [2024-04-24 10:06:25.996682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004404 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:25.996709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:25.996761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 
cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:25.996774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:25.996822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000030f9 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:25.996836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.912 #14 NEW cov: 11642 ft: 13011 corp: 9/64b lim: 10 exec/s: 0 rss: 69Mb L: 6/9 MS: 1 ChangeByte- 00:11:12.912 [2024-04-24 10:06:26.036855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.036881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:26.036932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.036946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:26.036997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000b306 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.037010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.912 #15 NEW cov: 11642 ft: 13041 corp: 10/70b lim: 10 exec/s: 0 rss: 69Mb L: 6/9 MS: 1 ChangeBinInt- 00:11:12.912 [2024-04-24 10:06:26.076948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.076973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:26.077023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004429 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.077038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:26.077087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.077116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.912 #16 NEW cov: 11642 ft: 13077 corp: 11/77b lim: 10 exec/s: 0 rss: 69Mb L: 7/9 MS: 1 InsertByte- 00:11:12.912 [2024-04-24 10:06:26.117148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.117176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:26.117226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004445 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.117241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 
10:06:26.117290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004445 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.117304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:26.117353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.117366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:12.912 #17 NEW cov: 11642 ft: 13106 corp: 12/86b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 ChangeBit- 00:11:12.912 [2024-04-24 10:06:26.157037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.157070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:12.912 [2024-04-24 10:06:26.157119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000030f9 cdw11:00000000 00:11:12.912 [2024-04-24 10:06:26.157132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:12.912 #18 NEW cov: 11642 ft: 13298 corp: 13/90b lim: 10 exec/s: 0 rss: 69Mb L: 4/9 MS: 1 EraseBytes- 00:11:13.171 [2024-04-24 10:06:26.207391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.171 [2024-04-24 10:06:26.207419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.171 [2024-04-24 10:06:26.207469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.171 [2024-04-24 10:06:26.207482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.171 [2024-04-24 10:06:26.207531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.171 [2024-04-24 10:06:26.207545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.171 [2024-04-24 10:06:26.207593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.171 [2024-04-24 10:06:26.207605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.171 #19 NEW cov: 11642 ft: 13332 corp: 14/99b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 CrossOver- 00:11:13.171 [2024-04-24 10:06:26.247440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e44 cdw11:00000000 00:11:13.171 [2024-04-24 10:06:26.247466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.171 [2024-04-24 10:06:26.247516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000444 cdw11:00000000 00:11:13.171 [2024-04-24 10:06:26.247529] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.171 [2024-04-24 10:06:26.247577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.171 [2024-04-24 10:06:26.247591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.171 #20 NEW cov: 11642 ft: 13352 corp: 15/106b lim: 10 exec/s: 0 rss: 69Mb L: 7/9 MS: 1 InsertByte- 00:11:13.171 [2024-04-24 10:06:26.287653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.171 [2024-04-24 10:06:26.287679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.171 [2024-04-24 10:06:26.287731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.171 [2024-04-24 10:06:26.287744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.171 [2024-04-24 10:06:26.287794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.171 [2024-04-24 10:06:26.287808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.287857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000443d cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.287872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.172 #21 NEW cov: 11642 ft: 13376 corp: 16/114b lim: 10 exec/s: 0 rss: 69Mb L: 8/9 MS: 1 EraseBytes- 00:11:13.172 [2024-04-24 10:06:26.327758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.327784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.327833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.327846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.327895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.327908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.327956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000443c cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.327969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.172 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:13.172 #22 NEW cov: 11665 ft: 13448 corp: 17/122b lim: 10 exec/s: 0 rss: 69Mb L: 8/9 MS: 1 EraseBytes- 00:11:13.172 [2024-04-24 
10:06:26.367751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b44 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.367776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.367827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.367840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.367889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000b306 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.367902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.172 #23 NEW cov: 11665 ft: 13469 corp: 18/128b lim: 10 exec/s: 0 rss: 69Mb L: 6/9 MS: 1 ChangeByte- 00:11:13.172 [2024-04-24 10:06:26.408075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.408100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.408148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.408162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.408211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.408224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.408270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.408283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.408329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00007a3c cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.408346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.172 #24 NEW cov: 11665 ft: 13529 corp: 19/138b lim: 10 exec/s: 24 rss: 69Mb L: 10/10 MS: 1 InsertByte- 00:11:13.172 [2024-04-24 10:06:26.447990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004404 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.448016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 10:06:26.448071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004446 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.448085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.172 [2024-04-24 
10:06:26.448133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000044f9 cdw11:00000000 00:11:13.172 [2024-04-24 10:06:26.448147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.431 #25 NEW cov: 11665 ft: 13565 corp: 20/144b lim: 10 exec/s: 25 rss: 69Mb L: 6/10 MS: 1 ChangeBinInt- 00:11:13.431 [2024-04-24 10:06:26.487981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.488006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.488064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.488077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.431 #26 NEW cov: 11665 ft: 13567 corp: 21/149b lim: 10 exec/s: 26 rss: 69Mb L: 5/10 MS: 1 CrossOver- 00:11:13.431 [2024-04-24 10:06:26.528316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.528341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.528393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004445 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.528406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.528465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.528478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.528528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000443c cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.528542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.431 #27 NEW cov: 11665 ft: 13573 corp: 22/157b lim: 10 exec/s: 27 rss: 69Mb L: 8/10 MS: 1 EraseBytes- 00:11:13.431 [2024-04-24 10:06:26.568572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000644 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.568597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.568647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.568661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.568710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.568723] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.568770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.568784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.568831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000643c cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.568846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.431 #28 NEW cov: 11665 ft: 13581 corp: 23/167b lim: 10 exec/s: 28 rss: 69Mb L: 10/10 MS: 1 InsertByte- 00:11:13.431 [2024-04-24 10:06:26.608701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.608726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.608776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004445 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.608790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.608838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004544 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.608850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.608897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.608910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.431 [2024-04-24 10:06:26.608958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000443c cdw11:00000000 00:11:13.431 [2024-04-24 10:06:26.608973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.431 #29 NEW cov: 11665 ft: 13596 corp: 24/177b lim: 10 exec/s: 29 rss: 69Mb L: 10/10 MS: 1 CopyPart- 00:11:13.431 [2024-04-24 10:06:26.648794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004404 cdw11:00000000 00:11:13.432 [2024-04-24 10:06:26.648819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.432 [2024-04-24 10:06:26.648869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.432 [2024-04-24 10:06:26.648883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.432 [2024-04-24 10:06:26.648934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000044f9 cdw11:00000000 00:11:13.432 [2024-04-24 10:06:26.648947] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.432 [2024-04-24 10:06:26.648997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e2e2 cdw11:00000000 00:11:13.432 [2024-04-24 10:06:26.649010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.432 [2024-04-24 10:06:26.649057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000e2e2 cdw11:00000000 00:11:13.432 [2024-04-24 10:06:26.649077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.432 #30 NEW cov: 11665 ft: 13608 corp: 25/187b lim: 10 exec/s: 30 rss: 69Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:11:13.432 [2024-04-24 10:06:26.688782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.432 [2024-04-24 10:06:26.688806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.432 [2024-04-24 10:06:26.688856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004445 cdw11:00000000 00:11:13.432 [2024-04-24 10:06:26.688869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.432 [2024-04-24 10:06:26.688917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002e44 cdw11:00000000 00:11:13.432 [2024-04-24 10:06:26.688931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.432 [2024-04-24 10:06:26.688978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.432 [2024-04-24 10:06:26.688992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.691 #31 NEW cov: 11665 ft: 13653 corp: 26/196b lim: 10 exec/s: 31 rss: 69Mb L: 9/10 MS: 1 InsertByte- 00:11:13.691 [2024-04-24 10:06:26.738860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.738886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.738935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.738949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.739000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000044f9 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.739013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.691 #32 NEW cov: 11665 ft: 13668 corp: 27/202b lim: 10 exec/s: 32 rss: 70Mb L: 6/10 MS: 1 CrossOver- 00:11:13.691 [2024-04-24 10:06:26.769127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000644 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.769151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.769201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.769215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.769264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.769278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.769324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.769338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.769385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a3c cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.769399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.691 #33 NEW cov: 11665 ft: 13739 corp: 28/212b lim: 10 exec/s: 33 rss: 70Mb L: 10/10 MS: 1 ChangeBinInt- 00:11:13.691 [2024-04-24 10:06:26.809250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c4bb cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.809274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.809324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000bbba cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.809337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.809388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000babb cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.809401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.809451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bbbb cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.809464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.809513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000443c cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.809526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.691 #34 NEW cov: 11665 ft: 13754 corp: 29/222b lim: 10 exec/s: 34 rss: 70Mb L: 10/10 MS: 1 ChangeBinInt- 00:11:13.691 [2024-04-24 10:06:26.849384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e44 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.849409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.849459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.849472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.849520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000452e cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.849534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.849581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.849594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.849641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000443c cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.849655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.691 #35 NEW cov: 11665 ft: 13763 corp: 30/232b lim: 10 exec/s: 35 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:11:13.691 [2024-04-24 10:06:26.889500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000644 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.889525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.889575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.889589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.691 [2024-04-24 10:06:26.889637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.691 [2024-04-24 10:06:26.889653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.692 [2024-04-24 10:06:26.889700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a3c cdw11:00000000 00:11:13.692 [2024-04-24 10:06:26.889714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.692 [2024-04-24 10:06:26.889759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a3c cdw11:00000000 00:11:13.692 [2024-04-24 10:06:26.889773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.692 #36 NEW cov: 11665 ft: 13791 corp: 31/242b lim: 10 exec/s: 36 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:11:13.692 [2024-04-24 10:06:26.929575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000644 cdw11:00000000 00:11:13.692 [2024-04-24 10:06:26.929600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.692 [2024-04-24 10:06:26.929649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.692 [2024-04-24 10:06:26.929663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.692 [2024-04-24 10:06:26.929712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.692 [2024-04-24 10:06:26.929726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.692 [2024-04-24 10:06:26.929772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004464 cdw11:00000000 00:11:13.692 [2024-04-24 10:06:26.929786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.692 [2024-04-24 10:06:26.929831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000443c cdw11:00000000 00:11:13.692 [2024-04-24 10:06:26.929844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.692 #37 NEW cov: 11665 ft: 13816 corp: 32/252b lim: 10 exec/s: 37 rss: 70Mb L: 10/10 MS: 1 ShuffleBytes- 00:11:13.953 [2024-04-24 10:06:26.969497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e44 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:26.969523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:26.969574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:26.969587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:26.969639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:26.969653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.953 #38 NEW cov: 11665 ft: 13887 corp: 33/259b lim: 10 exec/s: 38 rss: 70Mb L: 7/10 MS: 1 CrossOver- 00:11:13.953 [2024-04-24 10:06:27.009688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.009713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.009765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004445 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.009778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.009829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004445 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.009843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.009891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.009904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.953 #39 NEW cov: 11665 ft: 13893 corp: 34/268b lim: 10 exec/s: 39 rss: 70Mb L: 9/10 MS: 1 ShuffleBytes- 00:11:13.953 [2024-04-24 10:06:27.049720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.049745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.049795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.049809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.049859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000443c cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.049872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.953 #40 NEW cov: 11665 ft: 13905 corp: 35/274b lim: 10 exec/s: 40 rss: 70Mb L: 6/10 MS: 1 EraseBytes- 00:11:13.953 [2024-04-24 10:06:27.090006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000644 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.090030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.090085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.090099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.090149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.090163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.090213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004464 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.090226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.090272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000443c cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.090285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.953 #41 NEW cov: 11665 ft: 13913 corp: 36/284b lim: 10 exec/s: 41 rss: 70Mb L: 
10/10 MS: 1 CopyPart- 00:11:13.953 [2024-04-24 10:06:27.130141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.130167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.130218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004419 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.130231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.130284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001919 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.130298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.130345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001944 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.130359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.130408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000044f9 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.130423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:13.953 #42 NEW cov: 11665 ft: 13922 corp: 37/294b lim: 10 exec/s: 42 rss: 70Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:11:13.953 [2024-04-24 10:06:27.160097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004404 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.160123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.160174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.160187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.160236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000030f1 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.160249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.953 #43 NEW cov: 11665 ft: 13966 corp: 38/300b lim: 10 exec/s: 43 rss: 70Mb L: 6/10 MS: 1 ChangeBit- 00:11:13.953 [2024-04-24 10:06:27.200229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.200255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.200305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.200319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.200368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.200381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:13.953 [2024-04-24 10:06:27.200431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004444 cdw11:00000000 00:11:13.953 [2024-04-24 10:06:27.200444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:13.953 #44 NEW cov: 11665 ft: 13996 corp: 39/309b lim: 10 exec/s: 44 rss: 70Mb L: 9/10 MS: 1 ShuffleBytes- 00:11:14.243 [2024-04-24 10:06:27.240577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000644 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.240604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.240655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.240670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.240718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.240735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.240786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000443c cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.240799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.240848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a3c cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.240863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:14.243 #45 NEW cov: 11665 ft: 14023 corp: 40/319b lim: 10 exec/s: 45 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:11:14.243 [2024-04-24 10:06:27.280496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.280523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.280573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003045 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.280587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.280635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.280649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.280699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000443c cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.280712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:14.243 #46 NEW cov: 11665 ft: 14029 corp: 41/327b lim: 10 exec/s: 46 rss: 70Mb L: 8/10 MS: 1 ChangeByte- 00:11:14.243 [2024-04-24 10:06:27.310443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.310470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.310522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004444 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.310536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.310585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000044f9 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.310599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:14.243 #47 NEW cov: 11665 ft: 14036 corp: 42/333b lim: 10 exec/s: 47 rss: 70Mb L: 6/10 MS: 1 CopyPart- 00:11:14.243 [2024-04-24 10:06:27.350789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000644 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.350816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.350868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005444 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.350882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.350932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.350949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.350995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004464 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.351008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.351057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000443c cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.351076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:14.243 #48 NEW cov: 11665 ft: 14041 corp: 43/343b lim: 10 exec/s: 48 rss: 70Mb L: 10/10 MS: 1 ChangeBit- 00:11:14.243 [2024-04-24 10:06:27.390773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004444 cdw11:00000000 00:11:14.243 [2024-04-24 
10:06:27.390798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.390849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004429 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.390862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.390913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004444 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.390927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:14.243 [2024-04-24 10:06:27.390974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:11:14.243 [2024-04-24 10:06:27.390987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:14.243 #49 NEW cov: 11665 ft: 14048 corp: 44/351b lim: 10 exec/s: 24 rss: 71Mb L: 8/10 MS: 1 CrossOver- 00:11:14.243 #49 DONE cov: 11665 ft: 14048 corp: 44/351b lim: 10 exec/s: 24 rss: 71Mb 00:11:14.243 Done 49 runs in 2 second(s) 00:11:14.523 10:06:27 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_6.conf 00:11:14.523 10:06:27 -- ../common.sh@72 -- # (( i++ )) 00:11:14.523 10:06:27 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:14.523 10:06:27 -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:11:14.523 10:06:27 -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:11:14.523 10:06:27 -- nvmf/run.sh@24 -- # local timen=1 00:11:14.523 10:06:27 -- nvmf/run.sh@25 -- # local core=0x1 00:11:14.523 10:06:27 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:11:14.523 10:06:27 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:11:14.523 10:06:27 -- nvmf/run.sh@29 -- # printf %02d 7 00:11:14.523 10:06:27 -- nvmf/run.sh@29 -- # port=4407 00:11:14.523 10:06:27 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:11:14.523 10:06:27 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:11:14.523 10:06:27 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:14.523 10:06:27 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 -r /var/tmp/spdk7.sock 00:11:14.523 [2024-04-24 10:06:27.598762] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:11:14.523 [2024-04-24 10:06:27.598854] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170403 ] 00:11:14.523 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.781 [2024-04-24 10:06:27.901097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.781 [2024-04-24 10:06:27.994261] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:14.781 [2024-04-24 10:06:27.994395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.781 [2024-04-24 10:06:28.053133] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.040 [2024-04-24 10:06:28.069334] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:11:15.040 INFO: Running with entropic power schedule (0xFF, 100). 00:11:15.040 INFO: Seed: 2550783051 00:11:15.040 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:15.040 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:15.040 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:11:15.040 INFO: A corpus is not provided, starting from an empty corpus 00:11:15.040 #2 INITED exec/s: 0 rss: 60Mb 00:11:15.040 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:15.040 This may also happen if the target rejected all inputs we tried so far 00:11:15.040 [2024-04-24 10:06:28.114615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2b cdw11:00000000 00:11:15.040 [2024-04-24 10:06:28.114645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.299 NEW_FUNC[1/662]: 0x48b060 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:11:15.299 NEW_FUNC[2/662]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:15.299 #4 NEW cov: 11438 ft: 11439 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 2 ShuffleBytes-InsertByte- 00:11:15.299 [2024-04-24 10:06:28.445408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2b cdw11:00000000 00:11:15.299 [2024-04-24 10:06:28.445448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.299 #5 NEW cov: 11551 ft: 11854 corp: 3/5b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CopyPart- 00:11:15.299 [2024-04-24 10:06:28.485866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:11:15.299 [2024-04-24 10:06:28.485895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.299 [2024-04-24 10:06:28.485943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.299 [2024-04-24 10:06:28.485956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:15.299 
[2024-04-24 10:06:28.486005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.299 [2024-04-24 10:06:28.486019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:15.299 [2024-04-24 10:06:28.486074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.299 [2024-04-24 10:06:28.486089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:15.299 [2024-04-24 10:06:28.486136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.299 [2024-04-24 10:06:28.486149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:15.299 #6 NEW cov: 11557 ft: 12383 corp: 4/15b lim: 10 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:11:15.299 [2024-04-24 10:06:28.525510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a46 cdw11:00000000 00:11:15.300 [2024-04-24 10:06:28.525541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.300 #11 NEW cov: 11642 ft: 12754 corp: 5/17b lim: 10 exec/s: 0 rss: 68Mb L: 2/10 MS: 5 ChangeBit-ChangeBit-ChangeByte-CrossOver-InsertByte- 00:11:15.300 [2024-04-24 10:06:28.565963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:11:15.300 [2024-04-24 10:06:28.565989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.300 [2024-04-24 10:06:28.566038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.300 [2024-04-24 10:06:28.566052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:15.300 [2024-04-24 10:06:28.566107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.300 [2024-04-24 10:06:28.566121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:15.300 [2024-04-24 10:06:28.566168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.300 [2024-04-24 10:06:28.566181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:15.559 #17 NEW cov: 11642 ft: 12837 corp: 6/26b lim: 10 exec/s: 0 rss: 68Mb L: 9/10 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:11:15.559 [2024-04-24 10:06:28.595819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.595845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.559 [2024-04-24 10:06:28.595894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002b2b 
cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.595907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:15.559 #18 NEW cov: 11642 ft: 13034 corp: 7/30b lim: 10 exec/s: 0 rss: 68Mb L: 4/10 MS: 1 CopyPart- 00:11:15.559 [2024-04-24 10:06:28.636215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fdff cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.636243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.559 [2024-04-24 10:06:28.636292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.636307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:15.559 [2024-04-24 10:06:28.636354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.636368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:15.559 [2024-04-24 10:06:28.636415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.636429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:15.559 #19 NEW cov: 11642 ft: 13194 corp: 8/39b lim: 10 exec/s: 0 rss: 68Mb L: 9/10 MS: 1 ChangeBinInt- 00:11:15.559 [2024-04-24 10:06:28.685979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a46 cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.686007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.559 #21 NEW cov: 11642 ft: 13251 corp: 9/41b lim: 10 exec/s: 0 rss: 68Mb L: 2/10 MS: 2 EraseBytes-CrossOver- 00:11:15.559 [2024-04-24 10:06:28.726275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.726304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.559 [2024-04-24 10:06:28.726356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002b2b cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.726371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:15.559 #22 NEW cov: 11642 ft: 13289 corp: 10/46b lim: 10 exec/s: 0 rss: 68Mb L: 5/10 MS: 1 CrossOver- 00:11:15.559 [2024-04-24 10:06:28.776262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002b46 cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.776288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.559 #23 NEW cov: 11642 ft: 13375 corp: 11/48b lim: 10 exec/s: 0 rss: 68Mb L: 2/10 MS: 1 ChangeByte- 00:11:15.559 [2024-04-24 10:06:28.816693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff 
cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.816719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.559 [2024-04-24 10:06:28.816767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.816781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:15.559 [2024-04-24 10:06:28.816830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.816844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:15.559 [2024-04-24 10:06:28.816891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:15.559 [2024-04-24 10:06:28.816904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:15.818 #25 NEW cov: 11642 ft: 13403 corp: 12/57b lim: 10 exec/s: 0 rss: 68Mb L: 9/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:11:15.818 [2024-04-24 10:06:28.856713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:28.856738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.818 [2024-04-24 10:06:28.856785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:28.856800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:15.818 [2024-04-24 10:06:28.856847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a46 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:28.856860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:15.818 #26 NEW cov: 11642 ft: 13532 corp: 13/63b lim: 10 exec/s: 0 rss: 69Mb L: 6/10 MS: 1 CrossOver- 00:11:15.818 [2024-04-24 10:06:28.897068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:28.897093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.818 [2024-04-24 10:06:28.897142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:28.897158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:15.818 [2024-04-24 10:06:28.897206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:28.897220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:15.818 [2024-04-24 10:06:28.897266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 
cdw10:00000000 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:28.897279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:15.818 [2024-04-24 10:06:28.897325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:11:15.818 [2024-04-24 10:06:28.897339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:15.818 #28 NEW cov: 11642 ft: 13559 corp: 14/73b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:11:15.818 [2024-04-24 10:06:28.936684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a25 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:28.936708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.818 #30 NEW cov: 11642 ft: 13637 corp: 15/75b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 2 EraseBytes-InsertByte- 00:11:15.818 [2024-04-24 10:06:28.976800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:11:15.818 [2024-04-24 10:06:28.976825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.818 #31 NEW cov: 11642 ft: 13658 corp: 16/77b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 1 CopyPart- 00:11:15.818 [2024-04-24 10:06:29.016949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:11:15.818 [2024-04-24 10:06:29.016974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.818 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:15.818 #32 NEW cov: 11665 ft: 13760 corp: 17/79b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 1 ChangeByte- 00:11:15.818 [2024-04-24 10:06:29.057536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:29.057561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:15.818 [2024-04-24 10:06:29.057608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:29.057621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:15.818 [2024-04-24 10:06:29.057671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:29.057684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:15.818 [2024-04-24 10:06:29.057729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004000 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:29.057742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:15.818 [2024-04-24 10:06:29.057789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.818 [2024-04-24 10:06:29.057803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:15.818 #33 NEW cov: 11665 ft: 13805 corp: 18/89b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 ChangeBit- 00:11:16.077 [2024-04-24 10:06:29.097651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.097676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.097727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.097741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.097792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.097806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.097852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.097865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.097911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003b0a cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.097925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.077 #34 NEW cov: 11665 ft: 13831 corp: 19/99b lim: 10 exec/s: 34 rss: 69Mb L: 10/10 MS: 1 ChangeByte- 00:11:16.077 [2024-04-24 10:06:29.137288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a64 cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.137313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.077 #35 NEW cov: 11665 ft: 13859 corp: 20/101b lim: 10 exec/s: 35 rss: 69Mb L: 2/10 MS: 1 ChangeByte- 00:11:16.077 [2024-04-24 10:06:29.177415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c70a cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.177440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.077 #36 NEW cov: 11665 ft: 13886 corp: 21/104b lim: 10 exec/s: 36 rss: 69Mb L: 3/10 MS: 1 InsertByte- 00:11:16.077 [2024-04-24 10:06:29.217940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fdff cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.217965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.218015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000aff cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.218028] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.218078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.218093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.218140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.218155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.218201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.218214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.077 #37 NEW cov: 11665 ft: 13916 corp: 22/114b lim: 10 exec/s: 37 rss: 69Mb L: 10/10 MS: 1 InsertByte- 00:11:16.077 [2024-04-24 10:06:29.258084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff9e cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.258110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.258160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.258174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.258221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.258236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.258284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.258297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.258344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.258358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.077 #38 NEW cov: 11665 ft: 13927 corp: 23/124b lim: 10 exec/s: 38 rss: 69Mb L: 10/10 MS: 1 InsertByte- 00:11:16.077 [2024-04-24 10:06:29.298212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.298238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.298289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.298302] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.298353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.298367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.298417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.298431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.298479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003b0a cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.298493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.077 #39 NEW cov: 11665 ft: 13937 corp: 24/134b lim: 10 exec/s: 39 rss: 70Mb L: 10/10 MS: 1 ChangeBit- 00:11:16.077 [2024-04-24 10:06:29.338341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff9e cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.338367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.077 [2024-04-24 10:06:29.338419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.077 [2024-04-24 10:06:29.338432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.078 [2024-04-24 10:06:29.338482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.078 [2024-04-24 10:06:29.338501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.078 [2024-04-24 10:06:29.338549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffef cdw11:00000000 00:11:16.078 [2024-04-24 10:06:29.338561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.078 [2024-04-24 10:06:29.338609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:11:16.078 [2024-04-24 10:06:29.338623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.348 #40 NEW cov: 11665 ft: 13949 corp: 25/144b lim: 10 exec/s: 40 rss: 70Mb L: 10/10 MS: 1 ChangeBit- 00:11:16.348 [2024-04-24 10:06:29.378488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.378515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.378564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.378577] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.378624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.378638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.378683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.378696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.378744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003b0a cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.378758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.349 #41 NEW cov: 11665 ft: 14034 corp: 26/154b lim: 10 exec/s: 41 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:11:16.349 [2024-04-24 10:06:29.418289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fdff cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.418313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.418365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000aff cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.418378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.349 #42 NEW cov: 11665 ft: 14068 corp: 27/159b lim: 10 exec/s: 42 rss: 70Mb L: 5/10 MS: 1 EraseBytes- 00:11:16.349 [2024-04-24 10:06:29.458677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.458701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.458750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.458763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.458812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.458825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.458875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.458889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.458937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003b0a cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.458951] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.349 #43 NEW cov: 11665 ft: 14080 corp: 28/169b lim: 10 exec/s: 43 rss: 70Mb L: 10/10 MS: 1 ShuffleBytes- 00:11:16.349 [2024-04-24 10:06:29.498774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.498800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.498849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.498863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.498910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002500 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.498924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.498972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.498985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.499030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003b0a cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.499045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.349 #44 NEW cov: 11665 ft: 14092 corp: 29/179b lim: 10 exec/s: 44 rss: 70Mb L: 10/10 MS: 1 ChangeByte- 00:11:16.349 [2024-04-24 10:06:29.538891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.538915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.538965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.538979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.539028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.539041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.539088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.539102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.539151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003b0b cdw11:00000000 00:11:16.349 [2024-04-24 
10:06:29.539165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.349 #45 NEW cov: 11665 ft: 14098 corp: 30/189b lim: 10 exec/s: 45 rss: 70Mb L: 10/10 MS: 1 ChangeBit- 00:11:16.349 [2024-04-24 10:06:29.579027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002efd cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.579052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.579107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.579122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.349 [2024-04-24 10:06:29.579171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.349 [2024-04-24 10:06:29.579185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.350 [2024-04-24 10:06:29.579234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.350 [2024-04-24 10:06:29.579247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.350 [2024-04-24 10:06:29.579295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:11:16.350 [2024-04-24 10:06:29.579309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.350 #46 NEW cov: 11665 ft: 14106 corp: 31/199b lim: 10 exec/s: 46 rss: 70Mb L: 10/10 MS: 1 InsertByte- 00:11:16.350 [2024-04-24 10:06:29.619099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:11:16.350 [2024-04-24 10:06:29.619125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.350 [2024-04-24 10:06:29.619176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.350 [2024-04-24 10:06:29.619189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.350 [2024-04-24 10:06:29.619238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.350 [2024-04-24 10:06:29.619251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.350 [2024-04-24 10:06:29.619300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.350 [2024-04-24 10:06:29.619313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.615 #47 NEW cov: 11665 ft: 14111 corp: 32/208b lim: 10 exec/s: 47 rss: 70Mb L: 9/10 MS: 1 EraseBytes- 00:11:16.615 [2024-04-24 10:06:29.659242] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a46 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.659267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.659319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.659332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.659381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.659396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.659443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.659457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.659508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.659522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.615 #48 NEW cov: 11665 ft: 14123 corp: 33/218b lim: 10 exec/s: 48 rss: 70Mb L: 10/10 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:11:16.615 [2024-04-24 10:06:29.699387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.699412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.699462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.699475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.699526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.699540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.699587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.699600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.699650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.699664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.615 #49 NEW cov: 11665 ft: 14131 corp: 34/228b lim: 10 exec/s: 49 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:11:16.615 
[2024-04-24 10:06:29.739406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000071ff cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.739432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.739482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.739495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.739545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.739558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.739610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.739623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.615 #54 NEW cov: 11665 ft: 14133 corp: 35/237b lim: 10 exec/s: 54 rss: 70Mb L: 9/10 MS: 5 CopyPart-ChangeByte-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:11:16.615 [2024-04-24 10:06:29.779645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.779670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.779719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.779733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.779783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.779797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.779847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003b00 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.779860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.615 [2024-04-24 10:06:29.779907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003b0a cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.779921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.615 #55 NEW cov: 11665 ft: 14147 corp: 36/247b lim: 10 exec/s: 55 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:11:16.615 [2024-04-24 10:06:29.819298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c74a cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.819324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.615 #56 NEW cov: 11665 ft: 14238 corp: 37/250b lim: 10 exec/s: 56 rss: 70Mb L: 3/10 MS: 1 ChangeBit- 00:11:16.615 [2024-04-24 10:06:29.859398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a46 cdw11:00000000 00:11:16.615 [2024-04-24 10:06:29.859423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.615 #57 NEW cov: 11665 ft: 14270 corp: 38/253b lim: 10 exec/s: 57 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:11:16.873 [2024-04-24 10:06:29.899962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:11:16.873 [2024-04-24 10:06:29.899988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.873 [2024-04-24 10:06:29.900040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.873 [2024-04-24 10:06:29.900054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:29.900111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.874 [2024-04-24 10:06:29.900125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:29.900176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.874 [2024-04-24 10:06:29.900189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:29.900238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.874 [2024-04-24 10:06:29.900253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.874 #58 NEW cov: 11665 ft: 14274 corp: 39/263b lim: 10 exec/s: 58 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:11:16.874 [2024-04-24 10:06:29.940063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:11:16.874 [2024-04-24 10:06:29.940088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:29.940137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.874 [2024-04-24 10:06:29.940150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:29.940200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.874 [2024-04-24 10:06:29.940215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:29.940264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:11:16.874 
[2024-04-24 10:06:29.940276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:29.940325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.874 [2024-04-24 10:06:29.940340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.874 #59 NEW cov: 11665 ft: 14286 corp: 40/273b lim: 10 exec/s: 59 rss: 70Mb L: 10/10 MS: 1 CrossOver- 00:11:16.874 [2024-04-24 10:06:29.979943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000025 cdw11:00000000 00:11:16.874 [2024-04-24 10:06:29.979970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:29.980022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.874 [2024-04-24 10:06:29.980037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:29.980092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000003b cdw11:00000000 00:11:16.874 [2024-04-24 10:06:29.980106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.874 #60 NEW cov: 11665 ft: 14366 corp: 41/280b lim: 10 exec/s: 60 rss: 70Mb L: 7/10 MS: 1 EraseBytes- 00:11:16.874 [2024-04-24 10:06:30.020097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:11:16.874 [2024-04-24 10:06:30.020130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:30.020186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.874 [2024-04-24 10:06:30.020201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:30.060156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:11:16.874 [2024-04-24 10:06:30.060187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:30.060239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7ff cdw11:00000000 00:11:16.874 [2024-04-24 10:06:30.060254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.874 #62 NEW cov: 11665 ft: 14384 corp: 42/285b lim: 10 exec/s: 62 rss: 70Mb L: 5/10 MS: 2 InsertRepeatedBytes-ChangeBit- 00:11:16.874 [2024-04-24 10:06:30.100549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fdff cdw11:00000000 00:11:16.874 [2024-04-24 10:06:30.100577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:30.100629] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.874 [2024-04-24 10:06:30.100642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:30.100696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:16.874 [2024-04-24 10:06:30.100709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:30.100759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000aff cdw11:00000000 00:11:16.874 [2024-04-24 10:06:30.100772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:16.874 [2024-04-24 10:06:30.100823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:11:16.874 [2024-04-24 10:06:30.100836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:16.874 #63 NEW cov: 11665 ft: 14398 corp: 43/295b lim: 10 exec/s: 31 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:11:16.874 #63 DONE cov: 11665 ft: 14398 corp: 43/295b lim: 10 exec/s: 31 rss: 70Mb 00:11:16.874 ###### Recommended dictionary. ###### 00:11:16.874 "\001\000\000\000\000\000\000\000" # Uses: 1 00:11:16.874 ###### End of recommended dictionary. ###### 00:11:16.874 Done 63 runs in 2 second(s) 00:11:17.132 10:06:30 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_7.conf 00:11:17.132 10:06:30 -- ../common.sh@72 -- # (( i++ )) 00:11:17.132 10:06:30 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:17.132 10:06:30 -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:11:17.132 10:06:30 -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:11:17.132 10:06:30 -- nvmf/run.sh@24 -- # local timen=1 00:11:17.132 10:06:30 -- nvmf/run.sh@25 -- # local core=0x1 00:11:17.132 10:06:30 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:11:17.132 10:06:30 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:11:17.132 10:06:30 -- nvmf/run.sh@29 -- # printf %02d 8 00:11:17.132 10:06:30 -- nvmf/run.sh@29 -- # port=4408 00:11:17.132 10:06:30 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:11:17.132 10:06:30 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:11:17.132 10:06:30 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:17.132 10:06:30 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 -r /var/tmp/spdk8.sock 00:11:17.132 [2024-04-24 10:06:30.291679] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:11:17.132 [2024-04-24 10:06:30.291752] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170772 ] 00:11:17.132 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.390 [2024-04-24 10:06:30.575023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.390 [2024-04-24 10:06:30.668383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:17.391 [2024-04-24 10:06:30.668519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.649 [2024-04-24 10:06:30.726996] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.649 [2024-04-24 10:06:30.743224] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:11:17.649 INFO: Running with entropic power schedule (0xFF, 100). 00:11:17.649 INFO: Seed: 929821070 00:11:17.649 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:17.649 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:17.649 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:11:17.649 INFO: A corpus is not provided, starting from an empty corpus 00:11:17.649 [2024-04-24 10:06:30.798524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.649 [2024-04-24 10:06:30.798554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:17.649 #2 INITED cov: 11466 ft: 11467 corp: 1/1b exec/s: 0 rss: 66Mb 00:11:17.649 [2024-04-24 10:06:30.829007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.649 [2024-04-24 10:06:30.829034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:17.649 [2024-04-24 10:06:30.829094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.649 [2024-04-24 10:06:30.829108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:17.649 [2024-04-24 10:06:30.829161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.649 [2024-04-24 10:06:30.829176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:17.649 [2024-04-24 10:06:30.829228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.649 [2024-04-24 10:06:30.829242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:17.649 [2024-04-24 10:06:30.829294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.649 [2024-04-24 10:06:30.829309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:17.649 #3 NEW cov: 11579 ft: 12810 corp: 2/6b lim: 5 exec/s: 0 rss: 66Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:11:17.649 [2024-04-24 10:06:30.879170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.649 [2024-04-24 10:06:30.879195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:17.649 [2024-04-24 10:06:30.879249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.649 [2024-04-24 10:06:30.879264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:17.649 [2024-04-24 10:06:30.879315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.649 [2024-04-24 10:06:30.879328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:17.649 [2024-04-24 10:06:30.879380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.650 [2024-04-24 10:06:30.879393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:17.650 [2024-04-24 10:06:30.879446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.650 [2024-04-24 10:06:30.879460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:17.650 #4 NEW cov: 11585 ft: 12936 corp: 3/11b lim: 5 exec/s: 0 rss: 67Mb L: 5/5 MS: 1 ChangeBit- 00:11:17.650 [2024-04-24 10:06:30.919311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.650 [2024-04-24 10:06:30.919336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:17.650 [2024-04-24 10:06:30.919391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.650 [2024-04-24 10:06:30.919405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:17.650 [2024-04-24 10:06:30.919457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.650 [2024-04-24 10:06:30.919471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:17.650 [2024-04-24 10:06:30.919522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.650 [2024-04-24 10:06:30.919535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:17.650 [2024-04-24 10:06:30.919586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.650 [2024-04-24 10:06:30.919599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:17.907 #5 NEW cov: 11670 ft: 13135 corp: 4/16b lim: 5 exec/s: 0 rss: 67Mb L: 5/5 MS: 1 ChangeByte- 00:11:17.907 [2024-04-24 10:06:30.969403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:30.969428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:30.969482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:30.969496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:30.969549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:30.969563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:30.969615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:30.969628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:30.969679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:30.969692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:17.907 #6 NEW cov: 11670 ft: 13242 corp: 5/21b lim: 5 exec/s: 0 rss: 67Mb L: 5/5 MS: 1 ChangeBit- 00:11:17.907 [2024-04-24 10:06:31.009526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.009551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.009608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.009623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.009674] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.009689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.009739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.009753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.009805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.009819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:17.907 #7 NEW cov: 11670 ft: 13398 corp: 6/26b lim: 5 exec/s: 0 rss: 67Mb L: 5/5 MS: 1 CMP- DE: "\377\377\377~"- 00:11:17.907 [2024-04-24 10:06:31.049597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.049622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.049676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.049690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.049742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.049755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.049809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.049822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.049874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.049888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:17.907 #8 NEW cov: 11670 ft: 13486 corp: 7/31b lim: 5 exec/s: 0 rss: 67Mb L: 5/5 MS: 1 ChangeByte- 00:11:17.907 [2024-04-24 10:06:31.089722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.089747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.089800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.089814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.089872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.089886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.089936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.089950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.090003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.090017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:17.907 #9 NEW cov: 11670 ft: 13515 corp: 8/36b lim: 5 exec/s: 0 rss: 67Mb L: 5/5 MS: 1 CrossOver- 00:11:17.907 [2024-04-24 10:06:31.129837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.129863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.129914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.129927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.129981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.129994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.130047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.130064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.130116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.130130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:17.907 #10 NEW 
cov: 11670 ft: 13600 corp: 9/41b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 CopyPart- 00:11:17.907 [2024-04-24 10:06:31.169692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.169718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.169772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.169786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:17.907 [2024-04-24 10:06:31.169841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:17.907 [2024-04-24 10:06:31.169856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.166 #11 NEW cov: 11670 ft: 13837 corp: 10/44b lim: 5 exec/s: 0 rss: 68Mb L: 3/5 MS: 1 EraseBytes- 00:11:18.166 [2024-04-24 10:06:31.210084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.210109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.210163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.210177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.210228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.210242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.210294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.210307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.210358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.210372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.166 #12 NEW cov: 11670 ft: 13869 corp: 11/49b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ShuffleBytes- 00:11:18.166 [2024-04-24 10:06:31.249613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 
[2024-04-24 10:06:31.249640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.166 #13 NEW cov: 11670 ft: 13899 corp: 12/50b lim: 5 exec/s: 0 rss: 68Mb L: 1/5 MS: 1 ShuffleBytes- 00:11:18.166 [2024-04-24 10:06:31.290305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.290330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.290383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.290396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.290448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.290462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.290512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.290525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.290577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.290591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.166 #14 NEW cov: 11670 ft: 13987 corp: 13/55b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 CopyPart- 00:11:18.166 [2024-04-24 10:06:31.340522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.340548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.340601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.340616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.340668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.340683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.340736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.340750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.340802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.340815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.166 #15 NEW cov: 11670 ft: 14008 corp: 14/60b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 CopyPart- 00:11:18.166 [2024-04-24 10:06:31.380641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.380668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.380727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.380743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.380800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.380815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.380871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.380886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.380940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.380955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.166 #16 NEW cov: 11670 ft: 14035 corp: 15/65b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ShuffleBytes- 00:11:18.166 [2024-04-24 10:06:31.430431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.430460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.430515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.430529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.166 [2024-04-24 10:06:31.430579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.166 [2024-04-24 10:06:31.430593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.425 #17 NEW cov: 11670 ft: 14055 corp: 16/68b lim: 5 exec/s: 0 rss: 68Mb L: 3/5 MS: 1 EraseBytes- 00:11:18.425 [2024-04-24 10:06:31.470866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.470895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.470950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.470963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.471014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.471029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.471084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.471099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.471153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.471168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.425 #18 NEW cov: 11670 ft: 14117 corp: 17/73b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ShuffleBytes- 00:11:18.425 [2024-04-24 10:06:31.510817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.510845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.510900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.510914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.510967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.510981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 
10:06:31.511031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.511044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.425 #19 NEW cov: 11670 ft: 14140 corp: 18/77b lim: 5 exec/s: 0 rss: 68Mb L: 4/5 MS: 1 CrossOver- 00:11:18.425 [2024-04-24 10:06:31.551057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.551089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.551141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.551155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.551206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.551220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.551271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.551284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.551336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.551350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.425 #20 NEW cov: 11670 ft: 14150 corp: 19/82b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ChangeBit- 00:11:18.425 [2024-04-24 10:06:31.591143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.591168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.591221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.591235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.591287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.591300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.591353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.591367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.591418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.591432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.425 #21 NEW cov: 11670 ft: 14172 corp: 20/87b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 ChangeBit- 00:11:18.425 [2024-04-24 10:06:31.631289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.631318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.631372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.631386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.631438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.631452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.425 [2024-04-24 10:06:31.631503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.425 [2024-04-24 10:06:31.631517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.426 [2024-04-24 10:06:31.631568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.426 [2024-04-24 10:06:31.631582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.426 #22 NEW cov: 11670 ft: 14179 corp: 21/92b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:11:18.426 [2024-04-24 10:06:31.671366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.426 [2024-04-24 10:06:31.671392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.426 [2024-04-24 10:06:31.671443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.426 [2024-04-24 10:06:31.671457] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.426 [2024-04-24 10:06:31.671508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.426 [2024-04-24 10:06:31.671521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.426 [2024-04-24 10:06:31.671572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.426 [2024-04-24 10:06:31.671585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.426 [2024-04-24 10:06:31.671637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.426 [2024-04-24 10:06:31.671650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.684 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:18.684 #23 NEW cov: 11693 ft: 14207 corp: 22/97b lim: 5 exec/s: 23 rss: 69Mb L: 5/5 MS: 1 ChangeBinInt- 00:11:18.943 [2024-04-24 10:06:31.981831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:31.981876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.943 #24 NEW cov: 11693 ft: 14288 corp: 23/98b lim: 5 exec/s: 24 rss: 69Mb L: 1/5 MS: 1 ShuffleBytes- 00:11:18.943 [2024-04-24 10:06:32.022312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.022341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.022399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.022415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.022473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.022488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.022547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.022562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.943 #25 NEW cov: 11693 ft: 14313 corp: 24/102b lim: 5 exec/s: 25 
rss: 69Mb L: 4/5 MS: 1 EraseBytes- 00:11:18.943 [2024-04-24 10:06:32.072609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.072635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.072692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.072707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.072763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.072778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.072832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.072845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.072902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.072916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.943 #26 NEW cov: 11693 ft: 14338 corp: 25/107b lim: 5 exec/s: 26 rss: 70Mb L: 5/5 MS: 1 ChangeByte- 00:11:18.943 [2024-04-24 10:06:32.122791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.122816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.122871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.122886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.122944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.122958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.123013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.123027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:11:18.943 [2024-04-24 10:06:32.123087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.123101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.943 #27 NEW cov: 11693 ft: 14361 corp: 26/112b lim: 5 exec/s: 27 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:11:18.943 [2024-04-24 10:06:32.172923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.172948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.173005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.173020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:18.943 [2024-04-24 10:06:32.173080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.943 [2024-04-24 10:06:32.173095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:18.944 [2024-04-24 10:06:32.173152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.944 [2024-04-24 10:06:32.173166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:18.944 [2024-04-24 10:06:32.173222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:18.944 [2024-04-24 10:06:32.173236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:18.944 #28 NEW cov: 11693 ft: 14379 corp: 27/117b lim: 5 exec/s: 28 rss: 70Mb L: 5/5 MS: 1 ShuffleBytes- 00:11:19.202 [2024-04-24 10:06:32.222872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.202 [2024-04-24 10:06:32.222899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.202 [2024-04-24 10:06:32.222958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.202 [2024-04-24 10:06:32.222973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.202 [2024-04-24 10:06:32.223031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.202 [2024-04-24 10:06:32.223046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.202 [2024-04-24 10:06:32.223107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.202 [2024-04-24 10:06:32.223124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:19.202 #29 NEW cov: 11693 ft: 14437 corp: 28/121b lim: 5 exec/s: 29 rss: 70Mb L: 4/5 MS: 1 ChangeBinInt- 00:11:19.202 [2024-04-24 10:06:32.273207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.202 [2024-04-24 10:06:32.273233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.202 [2024-04-24 10:06:32.273293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.202 [2024-04-24 10:06:32.273307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.273365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.273379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.273447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.273462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.273515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.273530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:19.203 #30 NEW cov: 11693 ft: 14463 corp: 29/126b lim: 5 exec/s: 30 rss: 70Mb L: 5/5 MS: 1 CMP- DE: "\000\000\0017"- 00:11:19.203 [2024-04-24 10:06:32.323027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.323053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.323117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.323132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.323189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 
[2024-04-24 10:06:32.323203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.203 #31 NEW cov: 11693 ft: 14479 corp: 30/129b lim: 5 exec/s: 31 rss: 70Mb L: 3/5 MS: 1 EraseBytes- 00:11:19.203 [2024-04-24 10:06:32.363327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.363352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.363413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.363427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.363484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.363498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.363552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.363566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:19.203 #32 NEW cov: 11693 ft: 14532 corp: 31/133b lim: 5 exec/s: 32 rss: 70Mb L: 4/5 MS: 1 ShuffleBytes- 00:11:19.203 [2024-04-24 10:06:32.403608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.403634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.403692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.403707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.403764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.403778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.403835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.403849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.403905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.403919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:19.203 #33 NEW cov: 11693 ft: 14551 corp: 32/138b lim: 5 exec/s: 33 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:11:19.203 [2024-04-24 10:06:32.443652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.443679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.443737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.443751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.443806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.443821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.443876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.443890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:19.203 [2024-04-24 10:06:32.443950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.203 [2024-04-24 10:06:32.443965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:19.203 #34 NEW cov: 11693 ft: 14578 corp: 33/143b lim: 5 exec/s: 34 rss: 70Mb L: 5/5 MS: 1 ChangeBinInt- 00:11:19.462 [2024-04-24 10:06:32.493233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.493259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.462 #35 NEW cov: 11693 ft: 14643 corp: 34/144b lim: 5 exec/s: 35 rss: 70Mb L: 1/5 MS: 1 CrossOver- 00:11:19.462 [2024-04-24 10:06:32.533973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.533999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.534054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.534074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.534133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.534146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.534202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.534216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.534272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.534287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:19.462 #36 NEW cov: 11693 ft: 14745 corp: 35/149b lim: 5 exec/s: 36 rss: 70Mb L: 5/5 MS: 1 ShuffleBytes- 00:11:19.462 [2024-04-24 10:06:32.574083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.574110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.574170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.574185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.574240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.574255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.574313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.574326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.574387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.574402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:19.462 #37 NEW cov: 11693 ft: 14758 corp: 36/154b lim: 5 exec/s: 37 rss: 70Mb L: 5/5 MS: 1 ChangeByte- 00:11:19.462 [2024-04-24 10:06:32.624110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.624136] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.624195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.624209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.624266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.624280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.462 [2024-04-24 10:06:32.624336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.624351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:19.462 #38 NEW cov: 11693 ft: 14764 corp: 37/158b lim: 5 exec/s: 38 rss: 70Mb L: 4/5 MS: 1 EraseBytes- 00:11:19.462 [2024-04-24 10:06:32.674453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.462 [2024-04-24 10:06:32.674480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.463 [2024-04-24 10:06:32.674538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.463 [2024-04-24 10:06:32.674554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.463 [2024-04-24 10:06:32.674611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.463 [2024-04-24 10:06:32.674626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.463 [2024-04-24 10:06:32.674682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.463 [2024-04-24 10:06:32.674696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:19.463 [2024-04-24 10:06:32.674753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.463 [2024-04-24 10:06:32.674767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:19.463 #39 NEW cov: 11693 ft: 14775 corp: 38/163b lim: 5 exec/s: 39 rss: 70Mb L: 5/5 MS: 1 ChangeByte- 00:11:19.463 [2024-04-24 10:06:32.714206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:11:19.463 [2024-04-24 10:06:32.714237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.463 [2024-04-24 10:06:32.714295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.463 [2024-04-24 10:06:32.714309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.463 [2024-04-24 10:06:32.714364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.463 [2024-04-24 10:06:32.714378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.722 #40 NEW cov: 11693 ft: 14838 corp: 39/166b lim: 5 exec/s: 40 rss: 71Mb L: 3/5 MS: 1 EraseBytes- 00:11:19.722 [2024-04-24 10:06:32.764504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.722 [2024-04-24 10:06:32.764531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:19.722 [2024-04-24 10:06:32.764590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.722 [2024-04-24 10:06:32.764606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:19.722 [2024-04-24 10:06:32.764656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.722 [2024-04-24 10:06:32.764669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:19.722 [2024-04-24 10:06:32.764724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:19.722 [2024-04-24 10:06:32.764738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:19.722 #41 NEW cov: 11693 ft: 14850 corp: 40/170b lim: 5 exec/s: 20 rss: 71Mb L: 4/5 MS: 1 ChangeBit- 00:11:19.722 #41 DONE cov: 11693 ft: 14850 corp: 40/170b lim: 5 exec/s: 20 rss: 71Mb 00:11:19.722 ###### Recommended dictionary. ###### 00:11:19.722 "\377\377\377~" # Uses: 0 00:11:19.722 "\000\000\0017" # Uses: 0 00:11:19.722 ###### End of recommended dictionary. 
###### 00:11:19.722 Done 41 runs in 2 second(s) 00:11:19.722 10:06:32 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_8.conf 00:11:19.722 10:06:32 -- ../common.sh@72 -- # (( i++ )) 00:11:19.722 10:06:32 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:19.722 10:06:32 -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:11:19.722 10:06:32 -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:11:19.722 10:06:32 -- nvmf/run.sh@24 -- # local timen=1 00:11:19.722 10:06:32 -- nvmf/run.sh@25 -- # local core=0x1 00:11:19.722 10:06:32 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:11:19.722 10:06:32 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:11:19.722 10:06:32 -- nvmf/run.sh@29 -- # printf %02d 9 00:11:19.722 10:06:32 -- nvmf/run.sh@29 -- # port=4409 00:11:19.722 10:06:32 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:11:19.722 10:06:32 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:11:19.722 10:06:32 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:19.722 10:06:32 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 -r /var/tmp/spdk9.sock 00:11:19.722 [2024-04-24 10:06:32.974106] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:19.722 [2024-04-24 10:06:32.974182] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171146 ] 00:11:19.981 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.240 [2024-04-24 10:06:33.285453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.240 [2024-04-24 10:06:33.378076] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:20.240 [2024-04-24 10:06:33.378208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.240 [2024-04-24 10:06:33.436880] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.240 [2024-04-24 10:06:33.453095] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:11:20.240 INFO: Running with entropic power schedule (0xFF, 100). 
00:11:20.240 INFO: Seed: 3640820508 00:11:20.240 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:20.240 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:20.240 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:11:20.240 INFO: A corpus is not provided, starting from an empty corpus 00:11:20.240 [2024-04-24 10:06:33.508436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.240 [2024-04-24 10:06:33.508468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:20.498 #2 INITED cov: 11465 ft: 11466 corp: 1/1b exec/s: 0 rss: 66Mb 00:11:20.498 [2024-04-24 10:06:33.538375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.498 [2024-04-24 10:06:33.538402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:20.757 NEW_FUNC[1/1]: 0xeb6870 in spdk_process_is_primary /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:290 00:11:20.757 #3 NEW cov: 11579 ft: 11946 corp: 2/2b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 CopyPart- 00:11:20.757 [2024-04-24 10:06:33.849355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:33.849398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:20.757 #4 NEW cov: 11585 ft: 12147 corp: 3/3b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ChangeByte- 00:11:20.757 [2024-04-24 10:06:33.889315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:33.889342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:20.757 #5 NEW cov: 11670 ft: 12508 corp: 4/4b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ChangeBit- 00:11:20.757 [2024-04-24 10:06:33.929900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:33.929929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:20.757 [2024-04-24 10:06:33.929984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:33.929998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:20.757 [2024-04-24 10:06:33.930052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:33.930073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:11:20.757 [2024-04-24 10:06:33.930127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:33.930140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:20.757 #6 NEW cov: 11670 ft: 13375 corp: 5/8b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:11:20.757 [2024-04-24 10:06:33.969708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:33.969734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:20.757 [2024-04-24 10:06:33.969789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:33.969803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:20.757 #7 NEW cov: 11670 ft: 13613 corp: 6/10b lim: 5 exec/s: 0 rss: 68Mb L: 2/4 MS: 1 InsertByte- 00:11:20.757 [2024-04-24 10:06:34.009991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:34.010018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:20.757 [2024-04-24 10:06:34.010078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:34.010093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:20.757 [2024-04-24 10:06:34.010148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:20.757 [2024-04-24 10:06:34.010163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:20.757 #8 NEW cov: 11670 ft: 13849 corp: 7/13b lim: 5 exec/s: 0 rss: 68Mb L: 3/4 MS: 1 CMP- DE: "\016\000"- 00:11:21.016 [2024-04-24 10:06:34.049929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.049956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.016 [2024-04-24 10:06:34.050009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.050022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.016 #9 NEW cov: 11670 ft: 13886 corp: 8/15b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 EraseBytes- 00:11:21.016 [2024-04-24 10:06:34.089861] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.089887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.016 #10 NEW cov: 11670 ft: 13986 corp: 9/16b lim: 5 exec/s: 0 rss: 69Mb L: 1/4 MS: 1 ShuffleBytes- 00:11:21.016 [2024-04-24 10:06:34.129988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.130018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.016 #11 NEW cov: 11670 ft: 14052 corp: 10/17b lim: 5 exec/s: 0 rss: 69Mb L: 1/4 MS: 1 ChangeBit- 00:11:21.016 [2024-04-24 10:06:34.170442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.170469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.016 [2024-04-24 10:06:34.170525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.170540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.016 [2024-04-24 10:06:34.170595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.170609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:21.016 #12 NEW cov: 11670 ft: 14097 corp: 11/20b lim: 5 exec/s: 0 rss: 69Mb L: 3/4 MS: 1 CrossOver- 00:11:21.016 [2024-04-24 10:06:34.210394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.210419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.016 [2024-04-24 10:06:34.210476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.210490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.016 #13 NEW cov: 11670 ft: 14154 corp: 12/22b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 CrossOver- 00:11:21.016 [2024-04-24 10:06:34.250544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.250570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.016 [2024-04-24 10:06:34.250626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.016 [2024-04-24 10:06:34.250640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.016 #14 NEW cov: 11670 ft: 14263 corp: 13/24b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 ChangeBit- 00:11:21.275 [2024-04-24 10:06:34.300527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.275 [2024-04-24 10:06:34.300553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.275 #15 NEW cov: 11670 ft: 14334 corp: 14/25b lim: 5 exec/s: 0 rss: 69Mb L: 1/4 MS: 1 ShuffleBytes- 00:11:21.275 [2024-04-24 10:06:34.341086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.275 [2024-04-24 10:06:34.341112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.275 [2024-04-24 10:06:34.341167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.275 [2024-04-24 10:06:34.341184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.275 [2024-04-24 10:06:34.341236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.275 [2024-04-24 10:06:34.341250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:21.275 [2024-04-24 10:06:34.341303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.275 [2024-04-24 10:06:34.341317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:21.275 #16 NEW cov: 11670 ft: 14346 corp: 15/29b lim: 5 exec/s: 0 rss: 69Mb L: 4/4 MS: 1 CopyPart- 00:11:21.275 [2024-04-24 10:06:34.381363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.275 [2024-04-24 10:06:34.381388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.275 [2024-04-24 10:06:34.381443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.275 [2024-04-24 10:06:34.381457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.275 [2024-04-24 10:06:34.381511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.381525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:11:21.276 [2024-04-24 10:06:34.381578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.381592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:21.276 [2024-04-24 10:06:34.381645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.381659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:21.276 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:21.276 #17 NEW cov: 11693 ft: 14419 corp: 16/34b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 InsertByte- 00:11:21.276 [2024-04-24 10:06:34.431217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.431243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.276 [2024-04-24 10:06:34.431297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.431311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.276 [2024-04-24 10:06:34.431366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.431381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:21.276 #18 NEW cov: 11693 ft: 14457 corp: 17/37b lim: 5 exec/s: 0 rss: 69Mb L: 3/5 MS: 1 EraseBytes- 00:11:21.276 [2024-04-24 10:06:34.471301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.471328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.276 [2024-04-24 10:06:34.471385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.471399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.276 [2024-04-24 10:06:34.471450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.471465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:21.276 #19 NEW cov: 11693 ft: 14471 corp: 18/40b lim: 5 exec/s: 19 rss: 69Mb L: 3/5 MS: 1 ChangeBinInt- 00:11:21.276 [2024-04-24 10:06:34.511122] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.511148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.276 #20 NEW cov: 11693 ft: 14539 corp: 19/41b lim: 5 exec/s: 20 rss: 69Mb L: 1/5 MS: 1 ShuffleBytes- 00:11:21.276 [2024-04-24 10:06:34.551739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.551765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.276 [2024-04-24 10:06:34.551821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.551836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.276 [2024-04-24 10:06:34.551886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.551901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:21.276 [2024-04-24 10:06:34.551952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.276 [2024-04-24 10:06:34.551965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:21.534 #21 NEW cov: 11693 ft: 14569 corp: 20/45b lim: 5 exec/s: 21 rss: 69Mb L: 4/5 MS: 1 CrossOver- 00:11:21.535 [2024-04-24 10:06:34.601391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.601417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.535 #22 NEW cov: 11693 ft: 14649 corp: 21/46b lim: 5 exec/s: 22 rss: 69Mb L: 1/5 MS: 1 CopyPart- 00:11:21.535 [2024-04-24 10:06:34.641487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.641512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.535 #23 NEW cov: 11693 ft: 14664 corp: 22/47b lim: 5 exec/s: 23 rss: 69Mb L: 1/5 MS: 1 ChangeByte- 00:11:21.535 [2024-04-24 10:06:34.682053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.682087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.535 [2024-04-24 10:06:34.682146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 
nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.682171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.535 [2024-04-24 10:06:34.682224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.682238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:21.535 [2024-04-24 10:06:34.682292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.682305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:21.535 #24 NEW cov: 11693 ft: 14677 corp: 23/51b lim: 5 exec/s: 24 rss: 69Mb L: 4/5 MS: 1 CopyPart- 00:11:21.535 [2024-04-24 10:06:34.731736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.731761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.535 #25 NEW cov: 11693 ft: 14725 corp: 24/52b lim: 5 exec/s: 25 rss: 69Mb L: 1/5 MS: 1 CopyPart- 00:11:21.535 [2024-04-24 10:06:34.772502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.772527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.535 [2024-04-24 10:06:34.772581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.772594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.535 [2024-04-24 10:06:34.772649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.772662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:21.535 [2024-04-24 10:06:34.772716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.772729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:21.535 [2024-04-24 10:06:34.772783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.772797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:21.535 #26 NEW cov: 11693 ft: 14746 corp: 25/57b lim: 5 exec/s: 26 rss: 
69Mb L: 5/5 MS: 1 InsertByte- 00:11:21.535 [2024-04-24 10:06:34.812170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.812196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.535 [2024-04-24 10:06:34.812250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.535 [2024-04-24 10:06:34.812268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.794 #27 NEW cov: 11693 ft: 14778 corp: 26/59b lim: 5 exec/s: 27 rss: 69Mb L: 2/5 MS: 1 EraseBytes- 00:11:21.794 [2024-04-24 10:06:34.852730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.852755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.794 [2024-04-24 10:06:34.852811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.852826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.794 [2024-04-24 10:06:34.852880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.852894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:21.794 [2024-04-24 10:06:34.852945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.852958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:21.794 [2024-04-24 10:06:34.853011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.853025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:21.794 #28 NEW cov: 11693 ft: 14807 corp: 27/64b lim: 5 exec/s: 28 rss: 70Mb L: 5/5 MS: 1 ShuffleBytes- 00:11:21.794 [2024-04-24 10:06:34.902234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.902258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.794 #29 NEW cov: 11693 ft: 14829 corp: 28/65b lim: 5 exec/s: 29 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:11:21.794 [2024-04-24 10:06:34.942800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.942825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.794 [2024-04-24 10:06:34.942879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.942893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.794 [2024-04-24 10:06:34.942947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.942961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:21.794 [2024-04-24 10:06:34.943012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.943025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:21.794 #30 NEW cov: 11693 ft: 14840 corp: 29/69b lim: 5 exec/s: 30 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:11:21.794 [2024-04-24 10:06:34.982608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.982635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.794 [2024-04-24 10:06:34.982690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:34.982706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:21.794 #31 NEW cov: 11693 ft: 14858 corp: 30/71b lim: 5 exec/s: 31 rss: 70Mb L: 2/5 MS: 1 ChangeByte- 00:11:21.794 [2024-04-24 10:06:35.022586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:35.022612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:21.794 #32 NEW cov: 11693 ft: 14864 corp: 31/72b lim: 5 exec/s: 32 rss: 70Mb L: 1/5 MS: 1 ChangeByte- 00:11:21.794 [2024-04-24 10:06:35.062687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:21.794 [2024-04-24 10:06:35.062712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.053 #33 NEW cov: 11693 ft: 14887 corp: 32/73b lim: 5 exec/s: 33 rss: 70Mb L: 1/5 MS: 1 CrossOver- 00:11:22.053 [2024-04-24 10:06:35.103434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.053 [2024-04-24 10:06:35.103460] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.053 [2024-04-24 10:06:35.103517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.053 [2024-04-24 10:06:35.103531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:22.053 [2024-04-24 10:06:35.103584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.053 [2024-04-24 10:06:35.103599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:22.053 [2024-04-24 10:06:35.103650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.053 [2024-04-24 10:06:35.103663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:22.053 [2024-04-24 10:06:35.103715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.103728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:22.054 #34 NEW cov: 11693 ft: 14889 corp: 33/78b lim: 5 exec/s: 34 rss: 70Mb L: 5/5 MS: 1 ChangeASCIIInt- 00:11:22.054 [2024-04-24 10:06:35.142961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.142986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.054 #35 NEW cov: 11693 ft: 14899 corp: 34/79b lim: 5 exec/s: 35 rss: 70Mb L: 1/5 MS: 1 CrossOver- 00:11:22.054 [2024-04-24 10:06:35.183361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.183386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.054 [2024-04-24 10:06:35.183441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.183455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:22.054 [2024-04-24 10:06:35.183509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.183523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:22.054 #36 NEW cov: 11693 ft: 14904 corp: 35/82b lim: 5 exec/s: 36 rss: 70Mb L: 3/5 MS: 1 PersAutoDict- DE: "\016\000"- 00:11:22.054 [2024-04-24 10:06:35.223220] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.223245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.054 #37 NEW cov: 11693 ft: 14910 corp: 36/83b lim: 5 exec/s: 37 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:11:22.054 [2024-04-24 10:06:35.263618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.263643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.054 [2024-04-24 10:06:35.263699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.263713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:22.054 [2024-04-24 10:06:35.263766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.263780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:22.054 #38 NEW cov: 11693 ft: 14914 corp: 37/86b lim: 5 exec/s: 38 rss: 70Mb L: 3/5 MS: 1 EraseBytes- 00:11:22.054 [2024-04-24 10:06:35.303884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.303910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.054 [2024-04-24 10:06:35.303966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.303981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:22.054 [2024-04-24 10:06:35.304045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.304063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:22.054 [2024-04-24 10:06:35.304117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.054 [2024-04-24 10:06:35.304133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:22.054 #39 NEW cov: 11693 ft: 14920 corp: 38/90b lim: 5 exec/s: 39 rss: 70Mb L: 4/5 MS: 1 CrossOver- 00:11:22.313 [2024-04-24 10:06:35.343972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.343999] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.344055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.344075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.344131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.344146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.344201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.344214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:22.313 #40 NEW cov: 11693 ft: 14932 corp: 39/94b lim: 5 exec/s: 40 rss: 70Mb L: 4/5 MS: 1 PersAutoDict- DE: "\016\000"- 00:11:22.313 [2024-04-24 10:06:35.383676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.383703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.313 #41 NEW cov: 11693 ft: 14936 corp: 40/95b lim: 5 exec/s: 41 rss: 70Mb L: 1/5 MS: 1 CrossOver- 00:11:22.313 [2024-04-24 10:06:35.424438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.424464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.424520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.424534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.424589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.424602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.424657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.424670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.424722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.424736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:22.313 #42 NEW cov: 11693 ft: 14949 corp: 41/100b lim: 5 exec/s: 42 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:11:22.313 [2024-04-24 10:06:35.474560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.474590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.474645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.474659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.474714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.474728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.474781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.474794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:22.313 [2024-04-24 10:06:35.474847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:22.313 [2024-04-24 10:06:35.474861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:22.314 #43 NEW cov: 11693 ft: 14961 corp: 42/105b lim: 5 exec/s: 21 rss: 70Mb L: 5/5 MS: 1 ChangeByte- 00:11:22.314 #43 DONE cov: 11693 ft: 14961 corp: 42/105b lim: 5 exec/s: 21 rss: 70Mb 00:11:22.314 ###### Recommended dictionary. ###### 00:11:22.314 "\016\000" # Uses: 2 00:11:22.314 ###### End of recommended dictionary. 
###### 00:11:22.314 Done 43 runs in 2 second(s) 00:11:22.573 10:06:35 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_9.conf 00:11:22.573 10:06:35 -- ../common.sh@72 -- # (( i++ )) 00:11:22.573 10:06:35 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:22.573 10:06:35 -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:11:22.573 10:06:35 -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:11:22.573 10:06:35 -- nvmf/run.sh@24 -- # local timen=1 00:11:22.573 10:06:35 -- nvmf/run.sh@25 -- # local core=0x1 00:11:22.573 10:06:35 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:11:22.573 10:06:35 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:11:22.573 10:06:35 -- nvmf/run.sh@29 -- # printf %02d 10 00:11:22.573 10:06:35 -- nvmf/run.sh@29 -- # port=4410 00:11:22.573 10:06:35 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:11:22.573 10:06:35 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:11:22.573 10:06:35 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:22.573 10:06:35 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 -r /var/tmp/spdk10.sock 00:11:22.573 [2024-04-24 10:06:35.677951] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:22.574 [2024-04-24 10:06:35.678030] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171507 ] 00:11:22.574 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.832 [2024-04-24 10:06:35.992386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.832 [2024-04-24 10:06:36.084401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:22.832 [2024-04-24 10:06:36.084530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.090 [2024-04-24 10:06:36.143155] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.090 [2024-04-24 10:06:36.159361] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:11:23.090 INFO: Running with entropic power schedule (0xFF, 100). 00:11:23.090 INFO: Seed: 2051838714 00:11:23.090 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:23.090 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:23.090 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:11:23.090 INFO: A corpus is not provided, starting from an empty corpus 00:11:23.090 #2 INITED exec/s: 0 rss: 60Mb 00:11:23.090 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:23.090 This may also happen if the target rejected all inputs we tried so far 00:11:23.090 [2024-04-24 10:06:36.204223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.090 [2024-04-24 10:06:36.204259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.090 [2024-04-24 10:06:36.204295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.090 [2024-04-24 10:06:36.204312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:23.090 [2024-04-24 10:06:36.204344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.090 [2024-04-24 10:06:36.204361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:23.348 NEW_FUNC[1/663]: 0x48c9d0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:11:23.348 NEW_FUNC[2/663]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:23.348 #9 NEW cov: 11489 ft: 11490 corp: 2/31b lim: 40 exec/s: 0 rss: 68Mb L: 30/30 MS: 2 ChangeByte-InsertRepeatedBytes- 00:11:23.348 [2024-04-24 10:06:36.567458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.348 [2024-04-24 10:06:36.567496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.348 [2024-04-24 10:06:36.567586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.348 [2024-04-24 10:06:36.567602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:23.348 [2024-04-24 10:06:36.567693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.348 [2024-04-24 10:06:36.567709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:23.348 #10 NEW cov: 11602 ft: 12011 corp: 3/61b lim: 40 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 ShuffleBytes- 00:11:23.606 [2024-04-24 10:06:36.627598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.627626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.607 [2024-04-24 10:06:36.627721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.627738] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:23.607 [2024-04-24 10:06:36.627826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15e8eaea cdw11:ea151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.627840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:23.607 #11 NEW cov: 11608 ft: 12221 corp: 4/91b lim: 40 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 ChangeBinInt- 00:11:23.607 [2024-04-24 10:06:36.677717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.677742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.607 [2024-04-24 10:06:36.677835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.677849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:23.607 [2024-04-24 10:06:36.677938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.677952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:23.607 #12 NEW cov: 11693 ft: 12512 corp: 5/121b lim: 40 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 CopyPart- 00:11:23.607 [2024-04-24 10:06:36.738183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.738208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.607 [2024-04-24 10:06:36.738308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.738323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:23.607 [2024-04-24 10:06:36.738416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15e8eaea cdw11:ea150c0c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.738432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:23.607 [2024-04-24 10:06:36.738522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:0c0c0c15 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.738536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:23.607 #13 NEW cov: 11693 ft: 13008 corp: 6/156b lim: 40 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:11:23.607 [2024-04-24 10:06:36.787557] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.787583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.607 #14 NEW cov: 11693 ft: 13421 corp: 7/171b lim: 40 exec/s: 0 rss: 68Mb L: 15/35 MS: 1 InsertRepeatedBytes- 00:11:23.607 [2024-04-24 10:06:36.838375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15155515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.838402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.607 [2024-04-24 10:06:36.838491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.838508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:23.607 [2024-04-24 10:06:36.838593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15e8eaea cdw11:ea151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.607 [2024-04-24 10:06:36.838608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:23.607 #15 NEW cov: 11693 ft: 13610 corp: 8/201b lim: 40 exec/s: 0 rss: 68Mb L: 30/35 MS: 1 ChangeBit- 00:11:23.866 [2024-04-24 10:06:36.888683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:36.888709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.866 [2024-04-24 10:06:36.888792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:36.888807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:23.866 [2024-04-24 10:06:36.888897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:154b4b4b cdw11:4b151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:36.888911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:23.866 [2024-04-24 10:06:36.888997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:36.889010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:23.866 #16 NEW cov: 11693 ft: 13696 corp: 9/235b lim: 40 exec/s: 0 rss: 68Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:11:23.866 [2024-04-24 10:06:36.938584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:36.938609] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.866 [2024-04-24 10:06:36.938702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:36.938717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:23.866 [2024-04-24 10:06:36.938813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15e8eaea cdw11:ea151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:36.938827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:23.866 #17 NEW cov: 11693 ft: 13757 corp: 10/265b lim: 40 exec/s: 0 rss: 68Mb L: 30/35 MS: 1 ChangeBinInt- 00:11:23.866 [2024-04-24 10:06:36.988763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:36.988787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.866 [2024-04-24 10:06:36.988881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:36.988896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:23.866 [2024-04-24 10:06:36.988989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15e8eaea cdw11:ea151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:36.989003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:23.866 #18 NEW cov: 11693 ft: 13863 corp: 11/295b lim: 40 exec/s: 0 rss: 68Mb L: 30/35 MS: 1 ShuffleBytes- 00:11:23.866 [2024-04-24 10:06:37.038502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:37.038528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.866 #19 NEW cov: 11693 ft: 13893 corp: 12/307b lim: 40 exec/s: 0 rss: 68Mb L: 12/35 MS: 1 EraseBytes- 00:11:23.866 [2024-04-24 10:06:37.089467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15155515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:37.089491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:23.866 [2024-04-24 10:06:37.089580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:37.089595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:23.866 [2024-04-24 10:06:37.089697] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2c2c2c2c cdw11:2c2c15e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:37.089712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:23.866 [2024-04-24 10:06:37.089807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:eaeaea15 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:23.866 [2024-04-24 10:06:37.089822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:23.866 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:23.866 #20 NEW cov: 11710 ft: 13940 corp: 13/343b lim: 40 exec/s: 0 rss: 69Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:11:24.125 [2024-04-24 10:06:37.149365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.149390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.149483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.149496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.149587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:151515ea SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.149602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.125 #21 NEW cov: 11710 ft: 13953 corp: 14/373b lim: 40 exec/s: 0 rss: 69Mb L: 30/36 MS: 1 CrossOver- 00:11:24.125 [2024-04-24 10:06:37.199516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15158a15 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.199539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.199634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.199648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.199736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15e8eaea cdw11:ea151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.199749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.125 #22 NEW cov: 11710 ft: 13972 corp: 15/403b lim: 40 exec/s: 22 rss: 69Mb L: 30/36 MS: 1 ChangeByte- 00:11:24.125 [2024-04-24 10:06:37.249675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:4 nsid:0 cdw10:15151515 cdw11:15155515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.249700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.249781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.249795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.249871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8a151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.249886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.125 #23 NEW cov: 11710 ft: 14016 corp: 16/427b lim: 40 exec/s: 23 rss: 69Mb L: 24/36 MS: 1 CrossOver- 00:11:24.125 [2024-04-24 10:06:37.300420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.300444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.300526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.300540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.300625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.300638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.300724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:151515e8 cdw11:eaeaea15 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.300737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.300824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:15151515 cdw11:1515151a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.300838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:24.125 #24 NEW cov: 11710 ft: 14105 corp: 17/467b lim: 40 exec/s: 24 rss: 69Mb L: 40/40 MS: 1 CrossOver- 00:11:24.125 [2024-04-24 10:06:37.349998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:151515bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.350023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.350108] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.350123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.350224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.350238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.125 #25 NEW cov: 11710 ft: 14117 corp: 18/498b lim: 40 exec/s: 25 rss: 69Mb L: 31/40 MS: 1 InsertByte- 00:11:24.125 [2024-04-24 10:06:37.400264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.400288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.400378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.400394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.125 [2024-04-24 10:06:37.400478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15e8eaea cdw11:ea151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.125 [2024-04-24 10:06:37.400493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.385 #26 NEW cov: 11710 ft: 14132 corp: 19/529b lim: 40 exec/s: 26 rss: 69Mb L: 31/40 MS: 1 InsertByte- 00:11:24.385 [2024-04-24 10:06:37.450313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151e15 cdw11:15155515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.450337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.385 [2024-04-24 10:06:37.450420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.450436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.385 [2024-04-24 10:06:37.450527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15e8eaea cdw11:ea151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.450542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.385 #27 NEW cov: 11710 ft: 14144 corp: 20/559b lim: 40 exec/s: 27 rss: 69Mb L: 30/40 MS: 1 ChangeBinInt- 00:11:24.385 [2024-04-24 10:06:37.500948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.500974] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.385 [2024-04-24 10:06:37.501064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.501081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.385 [2024-04-24 10:06:37.501172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:1515e8ea SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.501190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.385 [2024-04-24 10:06:37.501281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:eaea1515 cdw11:15151715 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.501295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:24.385 #28 NEW cov: 11710 ft: 14168 corp: 21/595b lim: 40 exec/s: 28 rss: 69Mb L: 36/40 MS: 1 CopyPart- 00:11:24.385 [2024-04-24 10:06:37.560524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.560551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.385 [2024-04-24 10:06:37.560638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e8eaeaea cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.560656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.385 #30 NEW cov: 11710 ft: 14352 corp: 22/615b lim: 40 exec/s: 30 rss: 69Mb L: 20/40 MS: 2 ChangeByte-CrossOver- 00:11:24.385 [2024-04-24 10:06:37.610791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:eaeaea15 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.610817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.385 [2024-04-24 10:06:37.610910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:1515151a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.610925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.385 #31 NEW cov: 11710 ft: 14367 corp: 23/631b lim: 40 exec/s: 31 rss: 69Mb L: 16/40 MS: 1 EraseBytes- 00:11:24.385 [2024-04-24 10:06:37.661253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:eb151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.661278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.385 [2024-04-24 10:06:37.661379] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.661393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.385 [2024-04-24 10:06:37.661481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.385 [2024-04-24 10:06:37.661494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.644 #32 NEW cov: 11710 ft: 14374 corp: 24/662b lim: 40 exec/s: 32 rss: 69Mb L: 31/40 MS: 1 InsertByte- 00:11:24.644 [2024-04-24 10:06:37.710815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a9090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.644 [2024-04-24 10:06:37.710840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.644 #33 NEW cov: 11710 ft: 14390 corp: 25/675b lim: 40 exec/s: 33 rss: 69Mb L: 13/40 MS: 1 CrossOver- 00:11:24.644 [2024-04-24 10:06:37.761499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1515e7ea cdw11:eaea1515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.644 [2024-04-24 10:06:37.761524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.644 [2024-04-24 10:06:37.761612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.644 [2024-04-24 10:06:37.761626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.644 [2024-04-24 10:06:37.761712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.644 [2024-04-24 10:06:37.761728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.644 #34 NEW cov: 11710 ft: 14426 corp: 26/705b lim: 40 exec/s: 34 rss: 69Mb L: 30/40 MS: 1 ChangeBinInt- 00:11:24.644 [2024-04-24 10:06:37.821724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:95155515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.644 [2024-04-24 10:06:37.821747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.644 [2024-04-24 10:06:37.821831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.644 [2024-04-24 10:06:37.821845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.644 [2024-04-24 10:06:37.821928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8a151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.644 [2024-04-24 10:06:37.821941] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.644 #35 NEW cov: 11710 ft: 14473 corp: 27/729b lim: 40 exec/s: 35 rss: 69Mb L: 24/40 MS: 1 ChangeBit- 00:11:24.644 [2024-04-24 10:06:37.871956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:1515a415 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.644 [2024-04-24 10:06:37.871982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.644 [2024-04-24 10:06:37.872086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.644 [2024-04-24 10:06:37.872101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.644 [2024-04-24 10:06:37.872183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15e8eaea cdw11:ea151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.644 [2024-04-24 10:06:37.872198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.644 #36 NEW cov: 11710 ft: 14478 corp: 28/759b lim: 40 exec/s: 36 rss: 69Mb L: 30/40 MS: 1 ChangeByte- 00:11:24.903 [2024-04-24 10:06:37.932184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15155b15 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:37.932209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.903 [2024-04-24 10:06:37.932298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:37.932312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.903 [2024-04-24 10:06:37.932401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15e8eaea cdw11:ea151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:37.932418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.903 #37 NEW cov: 11710 ft: 14513 corp: 29/789b lim: 40 exec/s: 37 rss: 69Mb L: 30/40 MS: 1 ChangeBinInt- 00:11:24.903 [2024-04-24 10:06:37.981959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:ea151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:37.981984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.903 [2024-04-24 10:06:37.982064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15eaeaea cdw11:1515151a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:37.982080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.903 #38 NEW cov: 11710 ft: 14560 corp: 30/805b lim: 40 exec/s: 38 rss: 69Mb L: 
16/40 MS: 1 CopyPart- 00:11:24.903 [2024-04-24 10:06:38.032016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:158a1515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:38.032042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.903 #44 NEW cov: 11710 ft: 14579 corp: 31/818b lim: 40 exec/s: 44 rss: 69Mb L: 13/40 MS: 1 EraseBytes- 00:11:24.903 [2024-04-24 10:06:38.082678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:38.082704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.903 [2024-04-24 10:06:38.082787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:38.082803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.903 [2024-04-24 10:06:38.082893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000001e cdw11:151515ea SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:38.082907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.903 #45 NEW cov: 11717 ft: 14590 corp: 32/848b lim: 40 exec/s: 45 rss: 69Mb L: 30/40 MS: 1 ChangeBinInt- 00:11:24.903 [2024-04-24 10:06:38.133203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:15151515 cdw11:95155515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:38.133227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.903 [2024-04-24 10:06:38.133315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15158a15 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:38.133331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.903 [2024-04-24 10:06:38.133430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:38.133447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:24.903 [2024-04-24 10:06:38.133540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:1515158a cdw11:15151515 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.903 [2024-04-24 10:06:38.133555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:24.903 #46 NEW cov: 11717 ft: 14596 corp: 33/883b lim: 40 exec/s: 46 rss: 69Mb L: 35/40 MS: 1 CopyPart- 00:11:25.163 [2024-04-24 10:06:38.182652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a909090 cdw11:90909090 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:11:25.163 [2024-04-24 10:06:38.182677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.163 #47 NEW cov: 11717 ft: 14663 corp: 34/898b lim: 40 exec/s: 23 rss: 70Mb L: 15/40 MS: 1 ChangeByte- 00:11:25.163 #47 DONE cov: 11717 ft: 14663 corp: 34/898b lim: 40 exec/s: 23 rss: 70Mb 00:11:25.163 Done 47 runs in 2 second(s) 00:11:25.163 10:06:38 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_10.conf 00:11:25.163 10:06:38 -- ../common.sh@72 -- # (( i++ )) 00:11:25.163 10:06:38 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:25.163 10:06:38 -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:11:25.163 10:06:38 -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:11:25.163 10:06:38 -- nvmf/run.sh@24 -- # local timen=1 00:11:25.163 10:06:38 -- nvmf/run.sh@25 -- # local core=0x1 00:11:25.163 10:06:38 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:11:25.163 10:06:38 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:11:25.163 10:06:38 -- nvmf/run.sh@29 -- # printf %02d 11 00:11:25.163 10:06:38 -- nvmf/run.sh@29 -- # port=4411 00:11:25.163 10:06:38 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:11:25.163 10:06:38 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:11:25.163 10:06:38 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:25.163 10:06:38 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 -r /var/tmp/spdk11.sock 00:11:25.163 [2024-04-24 10:06:38.382708] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:25.163 [2024-04-24 10:06:38.382783] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171887 ] 00:11:25.163 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.727 [2024-04-24 10:06:38.699506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.727 [2024-04-24 10:06:38.790925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:25.727 [2024-04-24 10:06:38.791066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.727 [2024-04-24 10:06:38.849709] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.727 [2024-04-24 10:06:38.865915] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:11:25.727 INFO: Running with entropic power schedule (0xFF, 100). 
00:11:25.727 INFO: Seed: 462880044 00:11:25.727 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:25.727 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:25.727 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:11:25.727 INFO: A corpus is not provided, starting from an empty corpus 00:11:25.727 #2 INITED exec/s: 0 rss: 61Mb 00:11:25.727 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:25.727 This may also happen if the target rejected all inputs we tried so far 00:11:25.727 [2024-04-24 10:06:38.921768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:25.727 [2024-04-24 10:06:38.921797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.727 [2024-04-24 10:06:38.921854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:25.727 [2024-04-24 10:06:38.921872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.727 [2024-04-24 10:06:38.921930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:25.727 [2024-04-24 10:06:38.921944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:25.727 [2024-04-24 10:06:38.922001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:25.727 [2024-04-24 10:06:38.922015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:25.985 NEW_FUNC[1/660]: 0x48e740 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:11:25.985 NEW_FUNC[2/660]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:25.985 #7 NEW cov: 11456 ft: 11499 corp: 2/36b lim: 40 exec/s: 0 rss: 67Mb L: 35/35 MS: 5 InsertByte-ShuffleBytes-InsertByte-ChangeBit-InsertRepeatedBytes- 00:11:25.985 [2024-04-24 10:06:39.262346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a4db30a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:25.985 [2024-04-24 10:06:39.262392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.985 [2024-04-24 10:06:39.262451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:25.985 [2024-04-24 10:06:39.262465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.244 NEW_FUNC[1/4]: 0x171b450 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1088 00:11:26.244 NEW_FUNC[2/4]: 0x19548a0 in 
event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:528 00:11:26.244 #17 NEW cov: 11614 ft: 12342 corp: 3/57b lim: 40 exec/s: 0 rss: 68Mb L: 21/35 MS: 5 ShuffleBytes-InsertRepeatedBytes-ChangeBinInt-EraseBytes-CrossOver- 00:11:26.244 [2024-04-24 10:06:39.302383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.302413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.302471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.302486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.244 #18 NEW cov: 11620 ft: 12619 corp: 4/80b lim: 40 exec/s: 0 rss: 68Mb L: 23/35 MS: 1 EraseBytes- 00:11:26.244 [2024-04-24 10:06:39.342426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a00b300 cdw11:0a004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.342454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.342514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.342529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.244 #19 NEW cov: 11705 ft: 12913 corp: 5/101b lim: 40 exec/s: 0 rss: 68Mb L: 21/35 MS: 1 ShuffleBytes- 00:11:26.244 [2024-04-24 10:06:39.382920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.382948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.383008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.383022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.383076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.383090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.383160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.383174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:26.244 #20 NEW cov: 11705 ft: 12991 corp: 6/136b lim: 40 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 
ShuffleBytes- 00:11:26.244 [2024-04-24 10:06:39.422988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.423014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.423078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.423092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.423149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.423163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.423220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000071 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.423233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:26.244 #21 NEW cov: 11705 ft: 13111 corp: 7/171b lim: 40 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 CrossOver- 00:11:26.244 [2024-04-24 10:06:39.462820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a00b300 cdw11:0a004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.462846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.462905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01060000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.462919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.244 #22 NEW cov: 11705 ft: 13177 corp: 8/194b lim: 40 exec/s: 0 rss: 68Mb L: 23/35 MS: 1 CMP- DE: "\001\006"- 00:11:26.244 [2024-04-24 10:06:39.503265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.503290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.503354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.503368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.503425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.503439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:26.244 [2024-04-24 10:06:39.503498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000071 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.244 [2024-04-24 10:06:39.503512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:26.502 #23 NEW cov: 11705 ft: 13210 corp: 9/229b lim: 40 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 ShuffleBytes- 00:11:26.502 [2024-04-24 10:06:39.543132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a00b300 cdw11:0a004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.502 [2024-04-24 10:06:39.543159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.502 [2024-04-24 10:06:39.543230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.502 [2024-04-24 10:06:39.543245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.502 #29 NEW cov: 11705 ft: 13238 corp: 10/251b lim: 40 exec/s: 0 rss: 68Mb L: 22/35 MS: 1 InsertByte- 00:11:26.502 [2024-04-24 10:06:39.583013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a00b300 cdw11:0a004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.502 [2024-04-24 10:06:39.583039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.502 #30 NEW cov: 11705 ft: 13997 corp: 11/266b lim: 40 exec/s: 0 rss: 68Mb L: 15/35 MS: 1 EraseBytes- 00:11:26.502 [2024-04-24 10:06:39.633322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000ab3 cdw11:00004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.502 [2024-04-24 10:06:39.633347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.502 [2024-04-24 10:06:39.633405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.502 [2024-04-24 10:06:39.633420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.502 #31 NEW cov: 11705 ft: 14091 corp: 12/288b lim: 40 exec/s: 0 rss: 69Mb L: 22/35 MS: 1 ShuffleBytes- 00:11:26.502 [2024-04-24 10:06:39.673444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a00b300 cdw11:0a004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.503 [2024-04-24 10:06:39.673471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.503 [2024-04-24 10:06:39.673530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.503 [2024-04-24 10:06:39.673545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.503 #32 NEW cov: 11705 ft: 14113 corp: 13/309b lim: 40 exec/s: 0 rss: 69Mb L: 21/35 MS: 1 CopyPart- 
00:11:26.503 [2024-04-24 10:06:39.713355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000ab3 cdw11:00004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.503 [2024-04-24 10:06:39.713383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.503 #33 NEW cov: 11705 ft: 14133 corp: 14/320b lim: 40 exec/s: 0 rss: 69Mb L: 11/35 MS: 1 EraseBytes- 00:11:26.503 [2024-04-24 10:06:39.763689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.503 [2024-04-24 10:06:39.763715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.503 [2024-04-24 10:06:39.763774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.503 [2024-04-24 10:06:39.763789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.762 #34 NEW cov: 11705 ft: 14217 corp: 15/340b lim: 40 exec/s: 0 rss: 69Mb L: 20/35 MS: 1 EraseBytes- 00:11:26.762 [2024-04-24 10:06:39.803974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.804000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.762 [2024-04-24 10:06:39.804064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.804078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.762 [2024-04-24 10:06:39.804134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00007136 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.804148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:26.762 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:26.762 #35 NEW cov: 11728 ft: 14450 corp: 16/364b lim: 40 exec/s: 0 rss: 69Mb L: 24/35 MS: 1 EraseBytes- 00:11:26.762 [2024-04-24 10:06:39.844051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.844095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.762 [2024-04-24 10:06:39.844154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.844169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.762 [2024-04-24 10:06:39.844227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) 
qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00007136 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.844241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:26.762 #36 NEW cov: 11728 ft: 14487 corp: 17/388b lim: 40 exec/s: 0 rss: 69Mb L: 24/35 MS: 1 CopyPart- 00:11:26.762 [2024-04-24 10:06:39.884002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a4db30a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.884027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.762 [2024-04-24 10:06:39.884090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00001500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.884106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.762 #37 NEW cov: 11728 ft: 14569 corp: 18/409b lim: 40 exec/s: 37 rss: 69Mb L: 21/35 MS: 1 ChangeBinInt- 00:11:26.762 [2024-04-24 10:06:39.923972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a00b300 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.923998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.762 #38 NEW cov: 11728 ft: 14618 corp: 19/418b lim: 40 exec/s: 38 rss: 69Mb L: 9/35 MS: 1 CrossOver- 00:11:26.762 [2024-04-24 10:06:39.964252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000ab3 cdw11:00004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.964278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.762 [2024-04-24 10:06:39.964338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:39.964352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.762 #39 NEW cov: 11728 ft: 14704 corp: 20/440b lim: 40 exec/s: 39 rss: 69Mb L: 22/35 MS: 1 ChangeBit- 00:11:26.762 [2024-04-24 10:06:40.004741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:40.004769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.762 [2024-04-24 10:06:40.004831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:40.004846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.762 [2024-04-24 10:06:40.004907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:40.004922] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:26.762 [2024-04-24 10:06:40.004979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00007878 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.762 [2024-04-24 10:06:40.004994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:27.022 #40 NEW cov: 11728 ft: 14745 corp: 21/479b lim: 40 exec/s: 40 rss: 69Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:11:27.022 [2024-04-24 10:06:40.064608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a4db30a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.064646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.022 [2024-04-24 10:06:40.064706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00001500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.064721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.022 #41 NEW cov: 11728 ft: 14777 corp: 22/500b lim: 40 exec/s: 41 rss: 69Mb L: 21/39 MS: 1 ShuffleBytes- 00:11:27.022 [2024-04-24 10:06:40.114888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000000fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.114920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.022 [2024-04-24 10:06:40.114982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.114996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.022 [2024-04-24 10:06:40.115053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00007136 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.115073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:27.022 #42 NEW cov: 11728 ft: 14794 corp: 23/524b lim: 40 exec/s: 42 rss: 69Mb L: 24/39 MS: 1 ChangeBinInt- 00:11:27.022 [2024-04-24 10:06:40.164722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30000ab3 cdw11:00004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.164748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.022 #43 NEW cov: 11728 ft: 14808 corp: 24/535b lim: 40 exec/s: 43 rss: 69Mb L: 11/39 MS: 1 ChangeByte- 00:11:27.022 [2024-04-24 10:06:40.215485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a353535 cdw11:35353535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.215511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.022 
[2024-04-24 10:06:40.215570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.215585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.022 [2024-04-24 10:06:40.215645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.215659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:27.022 [2024-04-24 10:06:40.215718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:35353535 cdw11:35353535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.215732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:27.022 [2024-04-24 10:06:40.215791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00b3000a cdw11:0000004d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.215806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:27.022 #44 NEW cov: 11728 ft: 14890 corp: 25/575b lim: 40 exec/s: 44 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:11:27.022 [2024-04-24 10:06:40.264984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a4db30a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.022 [2024-04-24 10:06:40.265009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.022 #45 NEW cov: 11728 ft: 14930 corp: 26/589b lim: 40 exec/s: 45 rss: 70Mb L: 14/40 MS: 1 CrossOver- 00:11:27.281 [2024-04-24 10:06:40.305295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a00b300 cdw11:0a004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.281 [2024-04-24 10:06:40.305320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.281 [2024-04-24 10:06:40.305381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.281 [2024-04-24 10:06:40.305396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.281 #46 NEW cov: 11728 ft: 14940 corp: 27/610b lim: 40 exec/s: 46 rss: 70Mb L: 21/40 MS: 1 ShuffleBytes- 00:11:27.281 [2024-04-24 10:06:40.345865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a353535 cdw11:35353535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.281 [2024-04-24 10:06:40.345890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.281 [2024-04-24 10:06:40.345953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.281 [2024-04-24 10:06:40.345968] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.281 [2024-04-24 10:06:40.346027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.281 [2024-04-24 10:06:40.346043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:27.281 [2024-04-24 10:06:40.346103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:35b30000 cdw11:4d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.281 [2024-04-24 10:06:40.346118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:27.281 [2024-04-24 10:06:40.346176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.281 [2024-04-24 10:06:40.346191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:27.281 #47 NEW cov: 11728 ft: 14953 corp: 28/650b lim: 40 exec/s: 47 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:11:27.281 [2024-04-24 10:06:40.395847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.281 [2024-04-24 10:06:40.395873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.281 [2024-04-24 10:06:40.395934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.281 [2024-04-24 10:06:40.395949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.282 [2024-04-24 10:06:40.396008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.282 [2024-04-24 10:06:40.396023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:27.282 [2024-04-24 10:06:40.396080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.282 [2024-04-24 10:06:40.396095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:27.282 #48 NEW cov: 11728 ft: 14959 corp: 29/685b lim: 40 exec/s: 48 rss: 70Mb L: 35/40 MS: 1 CopyPart- 00:11:27.282 [2024-04-24 10:06:40.435929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.282 [2024-04-24 10:06:40.435955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.282 [2024-04-24 10:06:40.436016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.282 [2024-04-24 
10:06:40.436031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.282 [2024-04-24 10:06:40.436095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.282 [2024-04-24 10:06:40.436109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:27.282 [2024-04-24 10:06:40.436166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:06000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.282 [2024-04-24 10:06:40.436181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:27.282 #49 NEW cov: 11728 ft: 15000 corp: 30/722b lim: 40 exec/s: 49 rss: 70Mb L: 37/40 MS: 1 PersAutoDict- DE: "\001\006"- 00:11:27.282 [2024-04-24 10:06:40.475577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:300013b3 cdw11:00004d00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.282 [2024-04-24 10:06:40.475602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.282 #50 NEW cov: 11728 ft: 15100 corp: 31/733b lim: 40 exec/s: 50 rss: 70Mb L: 11/40 MS: 1 ChangeBinInt- 00:11:27.282 [2024-04-24 10:06:40.525921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.282 [2024-04-24 10:06:40.525946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.282 [2024-04-24 10:06:40.526008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.282 [2024-04-24 10:06:40.526022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.282 #51 NEW cov: 11728 ft: 15120 corp: 32/754b lim: 40 exec/s: 51 rss: 70Mb L: 21/40 MS: 1 CopyPart- 00:11:27.541 [2024-04-24 10:06:40.566028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.566053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.541 [2024-04-24 10:06:40.566115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.566129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.541 #52 NEW cov: 11728 ft: 15128 corp: 33/776b lim: 40 exec/s: 52 rss: 70Mb L: 22/40 MS: 1 EraseBytes- 00:11:27.541 [2024-04-24 10:06:40.606324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.606351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:11:27.541 [2024-04-24 10:06:40.606412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.606427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.541 [2024-04-24 10:06:40.606486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00007136 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.606501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:27.541 #53 NEW cov: 11728 ft: 15140 corp: 34/800b lim: 40 exec/s: 53 rss: 70Mb L: 24/40 MS: 1 ChangeASCIIInt- 00:11:27.541 [2024-04-24 10:06:40.646632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.646657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.541 [2024-04-24 10:06:40.646718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.646731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.541 [2024-04-24 10:06:40.646791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.646804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:27.541 [2024-04-24 10:06:40.646861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.646875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:27.541 #54 NEW cov: 11728 ft: 15169 corp: 35/835b lim: 40 exec/s: 54 rss: 70Mb L: 35/40 MS: 1 ChangeByte- 00:11:27.541 [2024-04-24 10:06:40.686249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000ab3 cdw11:0000004d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.686275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.541 #55 NEW cov: 11728 ft: 15175 corp: 36/846b lim: 40 exec/s: 55 rss: 70Mb L: 11/40 MS: 1 CopyPart- 00:11:27.541 [2024-04-24 10:06:40.726984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a3535cb cdw11:c0353535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 [2024-04-24 10:06:40.727011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.541 [2024-04-24 10:06:40.727067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:35353535 cdw11:35353535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.541 
[2024-04-24 10:06:40.727080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.542 [2024-04-24 10:06:40.727137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:35353535 cdw11:35353535 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.542 [2024-04-24 10:06:40.727152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:27.542 [2024-04-24 10:06:40.727209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:35b30000 cdw11:4d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.542 [2024-04-24 10:06:40.727223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:27.542 [2024-04-24 10:06:40.727282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.542 [2024-04-24 10:06:40.727297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:27.542 #56 NEW cov: 11728 ft: 15202 corp: 37/886b lim: 40 exec/s: 56 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:11:27.542 [2024-04-24 10:06:40.766630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a4db30a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.542 [2024-04-24 10:06:40.766656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.542 [2024-04-24 10:06:40.766722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000032 cdw11:00001500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.542 [2024-04-24 10:06:40.766736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.542 #57 NEW cov: 11728 ft: 15205 corp: 38/907b lim: 40 exec/s: 57 rss: 70Mb L: 21/40 MS: 1 ChangeByte- 00:11:27.542 [2024-04-24 10:06:40.806598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30000ab3 cdw11:00004d09 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.542 [2024-04-24 10:06:40.806624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.801 #58 NEW cov: 11728 ft: 15207 corp: 39/918b lim: 40 exec/s: 58 rss: 70Mb L: 11/40 MS: 1 CMP- DE: "\011\000"- 00:11:27.801 [2024-04-24 10:06:40.847196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.801 [2024-04-24 10:06:40.847224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.801 [2024-04-24 10:06:40.847284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00310000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.801 [2024-04-24 10:06:40.847298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:27.801 [2024-04-24 10:06:40.847361] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.801 [2024-04-24 10:06:40.847376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:27.801 [2024-04-24 10:06:40.847448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.801 [2024-04-24 10:06:40.847461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:27.801 #59 NEW cov: 11728 ft: 15261 corp: 40/954b lim: 40 exec/s: 59 rss: 70Mb L: 36/40 MS: 1 InsertByte- 00:11:27.801 [2024-04-24 10:06:40.886820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a4db30a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:27.801 [2024-04-24 10:06:40.886847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.801 #60 NEW cov: 11728 ft: 15279 corp: 41/968b lim: 40 exec/s: 30 rss: 70Mb L: 14/40 MS: 1 PersAutoDict- DE: "\001\006"- 00:11:27.801 #60 DONE cov: 11728 ft: 15279 corp: 41/968b lim: 40 exec/s: 30 rss: 70Mb 00:11:27.801 ###### Recommended dictionary. ###### 00:11:27.801 "\001\006" # Uses: 3 00:11:27.801 "\011\000" # Uses: 0 00:11:27.801 ###### End of recommended dictionary. ###### 00:11:27.801 Done 60 runs in 2 second(s) 00:11:27.801 10:06:41 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_11.conf 00:11:27.801 10:06:41 -- ../common.sh@72 -- # (( i++ )) 00:11:27.801 10:06:41 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:27.801 10:06:41 -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:11:27.801 10:06:41 -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:11:27.801 10:06:41 -- nvmf/run.sh@24 -- # local timen=1 00:11:27.801 10:06:41 -- nvmf/run.sh@25 -- # local core=0x1 00:11:27.801 10:06:41 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:11:27.801 10:06:41 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:11:27.801 10:06:41 -- nvmf/run.sh@29 -- # printf %02d 12 00:11:27.801 10:06:41 -- nvmf/run.sh@29 -- # port=4412 00:11:27.801 10:06:41 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:11:27.801 10:06:41 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:11:27.801 10:06:41 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:27.802 10:06:41 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 -r /var/tmp/spdk12.sock 00:11:28.061 [2024-04-24 10:06:41.097379] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
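Note: the nvmf/run.sh xtrace lines above show how each fuzzer target in this job is prepared and launched. Below is a minimal, illustrative bash sketch of that per-target setup (written here for target 12), reconstructed only from the traced commands; the derivation of the TCP service id from the target number and the redirection of the sed output into the per-run config are assumptions, since xtrace does not display redirections.

#!/usr/bin/env bash
set -euo pipefail

fuzzer=12                                  # -Z: fuzzer/target number
timen=1                                    # -t: seconds to run this target
core=0x1                                   # -m: core mask
port="44$(printf '%02d' "$fuzzer")"        # 4412 for target 12, 4413 for 13, ... (assumed scheme)
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
corpus_dir="$rootdir/../corpus/llvm_nvmf_$fuzzer"
nvmf_cfg="/tmp/fuzz_json_$(printf '%02d' "$fuzzer").conf"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

mkdir -p "$corpus_dir"

# Rewrite the template target config so the NVMe/TCP listener uses this
# target's port instead of the default 4420 (output redirection assumed).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Launch the libFuzzer-instrumented NVMe fuzzer against that listener,
# using the (possibly empty) per-target corpus directory; flags are copied
# from the traced invocation above.
"$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m "$core" -s 512 -P "$rootdir/../output/llvm/" \
    -F "$trid" -c "$nvmf_cfg" -t "$timen" \
    -D "$corpus_dir" -Z "$fuzzer" -r "/var/tmp/spdk$fuzzer.sock"

# The traced script removes the per-run config at the start of the next
# iteration; cleaning up here is equivalent for a single run.
rm -f "$nvmf_cfg"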
00:11:28.061 [2024-04-24 10:06:41.097455] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172254 ] 00:11:28.061 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.320 [2024-04-24 10:06:41.416225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.320 [2024-04-24 10:06:41.501689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:28.320 [2024-04-24 10:06:41.501830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.320 [2024-04-24 10:06:41.560427] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.320 [2024-04-24 10:06:41.576618] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:11:28.320 INFO: Running with entropic power schedule (0xFF, 100). 00:11:28.320 INFO: Seed: 3171873553 00:11:28.579 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:28.579 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:28.579 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:11:28.579 INFO: A corpus is not provided, starting from an empty corpus 00:11:28.579 #2 INITED exec/s: 0 rss: 61Mb 00:11:28.579 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:28.579 This may also happen if the target rejected all inputs we tried so far 00:11:28.579 [2024-04-24 10:06:41.625499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:582b0900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:28.579 [2024-04-24 10:06:41.625529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.838 NEW_FUNC[1/664]: 0x4904b0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:11:28.838 NEW_FUNC[2/664]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:28.838 #16 NEW cov: 11499 ft: 11500 corp: 2/10b lim: 40 exec/s: 0 rss: 67Mb L: 9/9 MS: 4 ChangeBit-CopyPart-ShuffleBytes-CMP- DE: "\337A\021\353X+\011\000"- 00:11:28.838 [2024-04-24 10:06:41.956291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:582b0900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:28.838 [2024-04-24 10:06:41.956340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.838 #17 NEW cov: 11612 ft: 12111 corp: 3/19b lim: 40 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 CopyPart- 00:11:28.838 [2024-04-24 10:06:42.006279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adf4111 cdw11:eb582b09 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:28.838 [2024-04-24 10:06:42.006306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.838 #21 NEW cov: 11618 ft: 12320 corp: 4/31b lim: 40 exec/s: 0 rss: 68Mb L: 12/12 MS: 4 ShuffleBytes-InsertRepeatedBytes-ShuffleBytes-PersAutoDict- DE: "\337A\021\353X+\011\000"- 
00:11:28.838 [2024-04-24 10:06:42.046363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adf1141 cdw11:eb582b09 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:28.838 [2024-04-24 10:06:42.046390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.838 #22 NEW cov: 11703 ft: 12541 corp: 5/43b lim: 40 exec/s: 0 rss: 68Mb L: 12/12 MS: 1 ShuffleBytes- 00:11:28.838 [2024-04-24 10:06:42.086662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:4111eb58 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:28.838 [2024-04-24 10:06:42.086689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.838 [2024-04-24 10:06:42.086743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:2b0900eb cdw11:582b0900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:28.838 [2024-04-24 10:06:42.086757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:28.838 #23 NEW cov: 11703 ft: 13272 corp: 6/60b lim: 40 exec/s: 0 rss: 68Mb L: 17/17 MS: 1 PersAutoDict- DE: "\337A\021\353X+\011\000"- 00:11:29.096 [2024-04-24 10:06:42.127068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:582b097e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.127096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.127151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.127165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.127217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.127231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.127283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.127297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:29.096 #24 NEW cov: 11703 ft: 13703 corp: 7/93b lim: 40 exec/s: 0 rss: 68Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:11:29.096 [2024-04-24 10:06:42.166856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:0adf4111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.166882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.166937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:eb582b09 cdw11:00582b09 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:11:29.096 [2024-04-24 10:06:42.166950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.096 #25 NEW cov: 11703 ft: 13804 corp: 8/114b lim: 40 exec/s: 0 rss: 68Mb L: 21/33 MS: 1 CrossOver- 00:11:29.096 [2024-04-24 10:06:42.206847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df411155 cdw11:000000eb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.206874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.096 #26 NEW cov: 11703 ft: 13851 corp: 9/127b lim: 40 exec/s: 0 rss: 68Mb L: 13/33 MS: 1 CMP- DE: "U\000\000\000"- 00:11:29.096 [2024-04-24 10:06:42.247087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:4111eb2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.247114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.247173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0900eb58 cdw11:2b09004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.247187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.096 #32 NEW cov: 11703 ft: 13895 corp: 10/143b lim: 40 exec/s: 0 rss: 68Mb L: 16/33 MS: 1 EraseBytes- 00:11:29.096 [2024-04-24 10:06:42.287354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:582b097e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.287379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.287434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.287448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.287500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.287515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.096 #33 NEW cov: 11703 ft: 14118 corp: 11/171b lim: 40 exec/s: 0 rss: 68Mb L: 28/33 MS: 1 EraseBytes- 00:11:29.096 [2024-04-24 10:06:42.327454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:4111eb2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.327480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.327533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0900eb01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.327547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.327600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000058 cdw11:2b09004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.327615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.096 #34 NEW cov: 11703 ft: 14169 corp: 12/195b lim: 40 exec/s: 0 rss: 68Mb L: 24/33 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:11:29.096 [2024-04-24 10:06:42.367612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:4111eb2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.367639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.367693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0900eb01 cdw11:00180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.367706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.096 [2024-04-24 10:06:42.367759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000058 cdw11:2b09004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.096 [2024-04-24 10:06:42.367773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.353 #35 NEW cov: 11703 ft: 14184 corp: 13/219b lim: 40 exec/s: 0 rss: 68Mb L: 24/33 MS: 1 ChangeBinInt- 00:11:29.353 [2024-04-24 10:06:42.407397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:eb2b41df cdw11:11005809 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.407425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.353 #36 NEW cov: 11703 ft: 14244 corp: 14/228b lim: 40 exec/s: 0 rss: 68Mb L: 9/33 MS: 1 ShuffleBytes- 00:11:29.353 [2024-04-24 10:06:42.447780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:4111eb58 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.447805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.353 [2024-04-24 10:06:42.447860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:2b097e7e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.447874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.353 [2024-04-24 10:06:42.447927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.447940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.353 #37 NEW cov: 11703 ft: 14338 corp: 15/256b lim: 40 exec/s: 0 rss: 69Mb L: 28/33 MS: 1 CopyPart- 00:11:29.353 [2024-04-24 10:06:42.487884] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df411158 cdw11:2b0900eb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.487911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.353 [2024-04-24 10:06:42.487966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:582bdf41 cdw11:11eb582b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.487981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.353 [2024-04-24 10:06:42.488032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0900eb58 cdw11:2b09004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.488046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.353 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:29.353 #38 NEW cov: 11726 ft: 14352 corp: 16/280b lim: 40 exec/s: 0 rss: 69Mb L: 24/33 MS: 1 CopyPart- 00:11:29.353 [2024-04-24 10:06:42.527734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:eb2b41dd cdw11:11005809 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.527760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.353 #39 NEW cov: 11726 ft: 14429 corp: 17/289b lim: 40 exec/s: 0 rss: 69Mb L: 9/33 MS: 1 ChangeBit- 00:11:29.353 [2024-04-24 10:06:42.568021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:4111eb2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.568046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.353 [2024-04-24 10:06:42.568105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0900582b cdw11:eb09004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.568120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.353 #40 NEW cov: 11726 ft: 14448 corp: 18/305b lim: 40 exec/s: 0 rss: 69Mb L: 16/33 MS: 1 ShuffleBytes- 00:11:29.353 [2024-04-24 10:06:42.608423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:582b097e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.608451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.353 [2024-04-24 10:06:42.608504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.608518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.353 [2024-04-24 10:06:42.608571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7e SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.608584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.353 [2024-04-24 10:06:42.608635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.353 [2024-04-24 10:06:42.608648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:29.353 #41 NEW cov: 11726 ft: 14457 corp: 19/338b lim: 40 exec/s: 41 rss: 69Mb L: 33/33 MS: 1 ShuffleBytes- 00:11:29.610 [2024-04-24 10:06:42.648555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:582b097e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.610 [2024-04-24 10:06:42.648581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.610 [2024-04-24 10:06:42.648634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.610 [2024-04-24 10:06:42.648648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.610 [2024-04-24 10:06:42.648701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.610 [2024-04-24 10:06:42.648716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.610 [2024-04-24 10:06:42.648768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:7e7e7e7f cdw11:7e7e7e00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.610 [2024-04-24 10:06:42.648781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:29.610 #42 NEW cov: 11726 ft: 14481 corp: 20/371b lim: 40 exec/s: 42 rss: 69Mb L: 33/33 MS: 1 ChangeBit- 00:11:29.610 [2024-04-24 10:06:42.688374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:0adf4111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.688398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.611 [2024-04-24 10:06:42.688453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:eb582b09 cdw11:0058ff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.688467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.611 #43 NEW cov: 11726 ft: 14539 corp: 21/390b lim: 40 exec/s: 43 rss: 69Mb L: 19/33 MS: 1 EraseBytes- 00:11:29.611 [2024-04-24 10:06:42.728608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:4111eb2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.728634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.611 [2024-04-24 10:06:42.728690] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0900eb01 cdw11:00180000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.728707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.611 [2024-04-24 10:06:42.728760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000a8 cdw11:d4f7024a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.728774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.611 #44 NEW cov: 11726 ft: 14555 corp: 22/414b lim: 40 exec/s: 44 rss: 69Mb L: 24/33 MS: 1 ChangeBinInt- 00:11:29.611 [2024-04-24 10:06:42.768903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adf1141 cdw11:eb582b09 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.768928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.611 [2024-04-24 10:06:42.768981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.768995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.611 [2024-04-24 10:06:42.769047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.769065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.611 [2024-04-24 10:06:42.769116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.769131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:29.611 #45 NEW cov: 11726 ft: 14559 corp: 23/446b lim: 40 exec/s: 45 rss: 69Mb L: 32/33 MS: 1 InsertRepeatedBytes- 00:11:29.611 [2024-04-24 10:06:42.808557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e92b41df cdw11:1109284a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.808582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.611 #48 NEW cov: 11726 ft: 14602 corp: 24/454b lim: 40 exec/s: 48 rss: 69Mb L: 8/33 MS: 3 EraseBytes-ChangeBit-InsertByte- 00:11:29.611 [2024-04-24 10:06:42.848670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:582b0900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.611 [2024-04-24 10:06:42.848695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.611 #49 NEW cov: 11726 ft: 14621 corp: 25/463b lim: 40 exec/s: 49 rss: 69Mb L: 9/33 MS: 1 CopyPart- 00:11:29.869 [2024-04-24 10:06:42.889256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:df4111eb cdw11:58ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.869 [2024-04-24 10:06:42.889281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.869 [2024-04-24 10:06:42.889337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ff2b097e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.869 [2024-04-24 10:06:42.889351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.869 [2024-04-24 10:06:42.889404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.869 [2024-04-24 10:06:42.889418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.869 [2024-04-24 10:06:42.889472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:7e7e7e7e cdw11:7e7e7e7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.869 [2024-04-24 10:06:42.889488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:29.869 #50 NEW cov: 11726 ft: 14700 corp: 26/500b lim: 40 exec/s: 50 rss: 69Mb L: 37/37 MS: 1 CMP- DE: "\377\377\377\377"- 00:11:29.869 [2024-04-24 10:06:42.939122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:2b5841eb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.869 [2024-04-24 10:06:42.939147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.869 [2024-04-24 10:06:42.939203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:df0a1109 cdw11:00582b09 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.869 [2024-04-24 10:06:42.939218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.869 #51 NEW cov: 11726 ft: 14732 corp: 27/521b lim: 40 exec/s: 51 rss: 70Mb L: 21/37 MS: 1 ShuffleBytes- 00:11:29.869 [2024-04-24 10:06:42.979176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:41b5b5b5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.869 [2024-04-24 10:06:42.979201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.869 [2024-04-24 10:06:42.979255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b5b5b511 cdw11:eb582b09 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.869 [2024-04-24 10:06:42.979269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.869 #52 NEW cov: 11726 ft: 14736 corp: 28/544b lim: 40 exec/s: 52 rss: 70Mb L: 23/37 MS: 1 InsertRepeatedBytes- 00:11:29.869 [2024-04-24 10:06:43.019437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:4111eb2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.870 [2024-04-24 10:06:43.019463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:11:29.870 [2024-04-24 10:06:43.019520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:09550000 cdw11:0000eb01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.870 [2024-04-24 10:06:43.019534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.870 [2024-04-24 10:06:43.019588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00180000 cdw11:000000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.870 [2024-04-24 10:06:43.019602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.870 #53 NEW cov: 11726 ft: 14755 corp: 29/572b lim: 40 exec/s: 53 rss: 70Mb L: 28/37 MS: 1 PersAutoDict- DE: "U\000\000\000"- 00:11:29.870 [2024-04-24 10:06:43.059396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df411158 cdw11:2b0900eb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.870 [2024-04-24 10:06:43.059420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.870 [2024-04-24 10:06:43.059475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:582bdf41 cdw11:11eb582b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.870 [2024-04-24 10:06:43.059489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.870 #54 NEW cov: 11726 ft: 14765 corp: 30/595b lim: 40 exec/s: 54 rss: 70Mb L: 23/37 MS: 1 EraseBytes- 00:11:29.870 [2024-04-24 10:06:43.099348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adf4111 cdw11:00eb092b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.870 [2024-04-24 10:06:43.099378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.870 #55 NEW cov: 11726 ft: 14804 corp: 31/607b lim: 40 exec/s: 55 rss: 70Mb L: 12/37 MS: 1 ShuffleBytes- 00:11:29.870 [2024-04-24 10:06:43.139447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:eb2bff41 cdw11:df110058 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:29.870 [2024-04-24 10:06:43.139472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.128 #56 NEW cov: 11726 ft: 14811 corp: 32/617b lim: 40 exec/s: 56 rss: 70Mb L: 10/37 MS: 1 InsertByte- 00:11:30.128 [2024-04-24 10:06:43.180009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:41b5b5b5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.128 [2024-04-24 10:06:43.180034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.128 [2024-04-24 10:06:43.180093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b5b5b511 cdw11:eb582b09 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.128 [2024-04-24 10:06:43.180108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.128 [2024-04-24 10:06:43.180165] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00eb0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.128 [2024-04-24 10:06:43.180179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.128 [2024-04-24 10:06:43.180232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:582b0900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.128 [2024-04-24 10:06:43.180245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:30.128 #57 NEW cov: 11726 ft: 14851 corp: 33/650b lim: 40 exec/s: 57 rss: 70Mb L: 33/37 MS: 1 InsertRepeatedBytes- 00:11:30.128 [2024-04-24 10:06:43.219871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4191eb cdw11:0adf4111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.128 [2024-04-24 10:06:43.219896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.129 [2024-04-24 10:06:43.219952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:eb582b09 cdw11:0058ff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.129 [2024-04-24 10:06:43.219966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.129 #58 NEW cov: 11726 ft: 14868 corp: 34/669b lim: 40 exec/s: 58 rss: 70Mb L: 19/37 MS: 1 ChangeBit- 00:11:30.129 [2024-04-24 10:06:43.259857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a30df41 cdw11:11eb582b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.129 [2024-04-24 10:06:43.259884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.129 #59 NEW cov: 11726 ft: 14883 corp: 35/682b lim: 40 exec/s: 59 rss: 70Mb L: 13/37 MS: 1 InsertByte- 00:11:30.129 [2024-04-24 10:06:43.289918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df110041 cdw11:df110058 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.129 [2024-04-24 10:06:43.289944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.129 #60 NEW cov: 11726 ft: 14884 corp: 36/692b lim: 40 exec/s: 60 rss: 70Mb L: 10/37 MS: 1 CopyPart- 00:11:30.129 [2024-04-24 10:06:43.330201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111eb cdw11:0adf4111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.129 [2024-04-24 10:06:43.330230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.129 [2024-04-24 10:06:43.330284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:eb582b09 cdw11:00582b09 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.129 [2024-04-24 10:06:43.330299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.129 #61 NEW cov: 11726 ft: 14887 corp: 37/711b lim: 40 exec/s: 61 rss: 70Mb L: 19/37 MS: 1 EraseBytes- 00:11:30.129 [2024-04-24 10:06:43.370309] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:eb2b41eb cdw11:2b41dd11 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.129 [2024-04-24 10:06:43.370335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.129 [2024-04-24 10:06:43.370391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0058094a cdw11:dd110058 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.129 [2024-04-24 10:06:43.370405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.129 #62 NEW cov: 11726 ft: 14910 corp: 38/729b lim: 40 exec/s: 62 rss: 70Mb L: 18/37 MS: 1 CopyPart- 00:11:30.388 [2024-04-24 10:06:43.410599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df4111df cdw11:41ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.410625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.388 [2024-04-24 10:06:43.410682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff11eb2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.410697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.388 [2024-04-24 10:06:43.410752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0900eb58 cdw11:2b09004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.410766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.388 #63 NEW cov: 11726 ft: 14914 corp: 39/753b lim: 40 exec/s: 63 rss: 70Mb L: 24/37 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:11:30.388 [2024-04-24 10:06:43.450690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df411158 cdw11:2b0900eb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.450717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.388 [2024-04-24 10:06:43.450772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:582bdf41 cdw11:3211eb58 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.450788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.388 [2024-04-24 10:06:43.450842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:2b09eb58 cdw11:2b09004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.450856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.388 #64 NEW cov: 11726 ft: 14923 corp: 40/777b lim: 40 exec/s: 64 rss: 70Mb L: 24/37 MS: 1 InsertByte- 00:11:30.388 [2024-04-24 10:06:43.490509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:eb2bff00 cdw11:584111df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.490535] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.388 #65 NEW cov: 11726 ft: 14980 corp: 41/787b lim: 40 exec/s: 65 rss: 70Mb L: 10/37 MS: 1 ShuffleBytes- 00:11:30.388 [2024-04-24 10:06:43.530935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df411175 cdw11:582b0900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.530961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.388 [2024-04-24 10:06:43.531018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:eb582bdf cdw11:4111eb58 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.531031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.388 [2024-04-24 10:06:43.531089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:2b09eb58 cdw11:2b09004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.531103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.388 #66 NEW cov: 11726 ft: 14983 corp: 42/811b lim: 40 exec/s: 66 rss: 70Mb L: 24/37 MS: 1 InsertByte- 00:11:30.388 [2024-04-24 10:06:43.570913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a2bdf41 cdw11:0911eb0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.570939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.388 [2024-04-24 10:06:43.570994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:df00ffff cdw11:ff4111eb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.571008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.388 #68 NEW cov: 11726 ft: 14991 corp: 43/827b lim: 40 exec/s: 68 rss: 70Mb L: 16/37 MS: 2 EraseBytes-CrossOver- 00:11:30.388 [2024-04-24 10:06:43.611230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:df411111 cdw11:582b0900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.611257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.388 [2024-04-24 10:06:43.611315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:eb582bdf cdw11:4111eb58 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.388 [2024-04-24 10:06:43.611330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.389 [2024-04-24 10:06:43.611384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:2b0900eb cdw11:582b004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:30.389 [2024-04-24 10:06:43.611398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.389 #69 NEW cov: 11726 ft: 15066 corp: 44/851b lim: 40 exec/s: 34 rss: 71Mb L: 24/37 MS: 1 CopyPart- 00:11:30.389 
#69 DONE cov: 11726 ft: 15066 corp: 44/851b lim: 40 exec/s: 34 rss: 71Mb 00:11:30.389 ###### Recommended dictionary. ###### 00:11:30.389 "\337A\021\353X+\011\000" # Uses: 2 00:11:30.389 "U\000\000\000" # Uses: 1 00:11:30.389 "\001\000\000\000\000\000\000\000" # Uses: 0 00:11:30.389 "\377\377\377\377" # Uses: 0 00:11:30.389 "\377\377\377\377\377\377\377\377" # Uses: 0 00:11:30.389 ###### End of recommended dictionary. ###### 00:11:30.389 Done 69 runs in 2 second(s) 00:11:30.648 10:06:43 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_12.conf 00:11:30.648 10:06:43 -- ../common.sh@72 -- # (( i++ )) 00:11:30.648 10:06:43 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:30.648 10:06:43 -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:11:30.648 10:06:43 -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:11:30.648 10:06:43 -- nvmf/run.sh@24 -- # local timen=1 00:11:30.648 10:06:43 -- nvmf/run.sh@25 -- # local core=0x1 00:11:30.648 10:06:43 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:11:30.648 10:06:43 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:11:30.648 10:06:43 -- nvmf/run.sh@29 -- # printf %02d 13 00:11:30.648 10:06:43 -- nvmf/run.sh@29 -- # port=4413 00:11:30.648 10:06:43 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:11:30.648 10:06:43 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:11:30.648 10:06:43 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:30.648 10:06:43 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 -r /var/tmp/spdk13.sock 00:11:30.648 [2024-04-24 10:06:43.816045] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:30.648 [2024-04-24 10:06:43.816139] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172629 ] 00:11:30.648 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.907 [2024-04-24 10:06:44.126501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.166 [2024-04-24 10:06:44.219670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:31.166 [2024-04-24 10:06:44.219798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.166 [2024-04-24 10:06:44.278586] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.166 [2024-04-24 10:06:44.294783] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:11:31.166 INFO: Running with entropic power schedule (0xFF, 100). 
00:11:31.166 INFO: Seed: 1595909724 00:11:31.166 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:31.166 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:31.166 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:11:31.166 INFO: A corpus is not provided, starting from an empty corpus 00:11:31.166 #2 INITED exec/s: 0 rss: 61Mb 00:11:31.166 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:31.166 This may also happen if the target rejected all inputs we tried so far 00:11:31.166 [2024-04-24 10:06:44.350371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.166 [2024-04-24 10:06:44.350403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.166 [2024-04-24 10:06:44.350463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.166 [2024-04-24 10:06:44.350477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.166 [2024-04-24 10:06:44.350535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.166 [2024-04-24 10:06:44.350550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.425 NEW_FUNC[1/663]: 0x492070 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:11:31.425 NEW_FUNC[2/663]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:31.425 #25 NEW cov: 11487 ft: 11488 corp: 2/27b lim: 40 exec/s: 0 rss: 68Mb L: 26/26 MS: 3 InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:11:31.425 [2024-04-24 10:06:44.671243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.425 [2024-04-24 10:06:44.671285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.425 [2024-04-24 10:06:44.671343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f0a2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.425 [2024-04-24 10:06:44.671356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.425 [2024-04-24 10:06:44.671414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.425 [2024-04-24 10:06:44.671428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.425 #26 NEW cov: 11600 ft: 11958 corp: 3/53b lim: 40 exec/s: 0 rss: 68Mb L: 26/26 MS: 1 CrossOver- 00:11:31.685 [2024-04-24 10:06:44.721287] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.721318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.721379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f0a2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.721394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.721452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.721466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.685 #32 NEW cov: 11606 ft: 12214 corp: 4/80b lim: 40 exec/s: 0 rss: 69Mb L: 27/27 MS: 1 CrossOver- 00:11:31.685 [2024-04-24 10:06:44.761401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.761429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.761488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f0a2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.761502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.761559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.761574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.685 #33 NEW cov: 11691 ft: 12476 corp: 5/106b lim: 40 exec/s: 0 rss: 69Mb L: 26/27 MS: 1 ChangeByte- 00:11:31.685 [2024-04-24 10:06:44.801548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f6b2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.801575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.801636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.801651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.801710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.801725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:11:31.685 #34 NEW cov: 11691 ft: 12659 corp: 6/133b lim: 40 exec/s: 0 rss: 69Mb L: 27/27 MS: 1 InsertByte- 00:11:31.685 [2024-04-24 10:06:44.841843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2fff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.841868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.841930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.841943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.842001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.842014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.842072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:2f2f2f0a cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.842086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.842144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f4c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.842158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:31.685 #35 NEW cov: 11691 ft: 13289 corp: 7/173b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:11:31.685 [2024-04-24 10:06:44.891791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.891816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.891876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2ffeff00 cdw11:000a2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.685 [2024-04-24 10:06:44.891890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.685 [2024-04-24 10:06:44.891947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.686 [2024-04-24 10:06:44.891961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.686 #36 NEW cov: 11691 ft: 13398 corp: 8/200b lim: 40 exec/s: 0 rss: 69Mb L: 27/40 MS: 1 CMP- DE: "\376\377\000\000"- 00:11:31.686 [2024-04-24 10:06:44.931870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f2f2f SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.686 [2024-04-24 10:06:44.931896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.686 [2024-04-24 10:06:44.931943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1a2f2f2f cdw11:2f0a2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.686 [2024-04-24 10:06:44.931957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.686 [2024-04-24 10:06:44.932015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.686 [2024-04-24 10:06:44.932029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.686 #37 NEW cov: 11691 ft: 13465 corp: 9/226b lim: 40 exec/s: 0 rss: 69Mb L: 26/40 MS: 1 ChangeBinInt- 00:11:31.945 [2024-04-24 10:06:44.972237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2fff cdw11:fffffff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:44.972263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:44.972338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:44.972352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:44.972410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:44.972423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:44.972479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:2f2f2f0a cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:44.972493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:44.972548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f4c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:44.972562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:31.945 #38 NEW cov: 11691 ft: 13484 corp: 10/266b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ChangeBinInt- 00:11:31.945 [2024-04-24 10:06:45.012418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2fff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.012446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.012504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.012520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.012578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.012595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.012654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:2f2f2f0a cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.012669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.012725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.012741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:31.945 #39 NEW cov: 11691 ft: 13575 corp: 11/306b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:11:31.945 [2024-04-24 10:06:45.052485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2fff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.052510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.052549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.052563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.052619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.052633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.052688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0a2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.052702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.052759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.052772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:31.945 #40 NEW cov: 11691 ft: 13594 corp: 12/346b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:11:31.945 [2024-04-24 10:06:45.092327] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02ffeff cdw11:00002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.092353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.092412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.092427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.092484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f0a2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.092497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.945 #41 NEW cov: 11691 ft: 13626 corp: 13/376b lim: 40 exec/s: 0 rss: 69Mb L: 30/40 MS: 1 PersAutoDict- DE: "\376\377\000\000"- 00:11:31.945 [2024-04-24 10:06:45.132453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f03b2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.132477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.132537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1a2f2f2f cdw11:2f0a2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.132551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.945 [2024-04-24 10:06:45.132607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.132624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.945 #42 NEW cov: 11691 ft: 13659 corp: 14/402b lim: 40 exec/s: 0 rss: 69Mb L: 26/40 MS: 1 ChangeByte- 00:11:31.945 [2024-04-24 10:06:45.172557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.945 [2024-04-24 10:06:45.172581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.946 [2024-04-24 10:06:45.172639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.946 [2024-04-24 10:06:45.172653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.946 [2024-04-24 10:06:45.172712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.946 [2024-04-24 10:06:45.172725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.946 #43 NEW cov: 11691 ft: 13677 corp: 15/428b lim: 40 exec/s: 0 rss: 69Mb L: 26/40 MS: 1 CMP- DE: "\000\000\000\002"- 00:11:31.946 [2024-04-24 10:06:45.212710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f03b2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.946 [2024-04-24 10:06:45.212735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.946 [2024-04-24 10:06:45.212790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1a2f2f2f cdw11:2f0a2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.946 [2024-04-24 10:06:45.212806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.946 [2024-04-24 10:06:45.212865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.946 [2024-04-24 10:06:45.212879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.205 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:32.205 #44 NEW cov: 11714 ft: 13743 corp: 16/453b lim: 40 exec/s: 0 rss: 69Mb L: 25/40 MS: 1 EraseBytes- 00:11:32.205 [2024-04-24 10:06:45.262790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f6f20000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.262816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.205 [2024-04-24 10:06:45.262874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.262888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.205 [2024-04-24 10:06:45.262946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.262959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.205 #47 NEW cov: 11714 ft: 13762 corp: 17/477b lim: 40 exec/s: 0 rss: 69Mb L: 24/40 MS: 3 CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:11:32.205 [2024-04-24 10:06:45.302910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.302938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.205 [2024-04-24 10:06:45.302996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f0a2f3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.303009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:11:32.205 [2024-04-24 10:06:45.303067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.303081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.205 #48 NEW cov: 11714 ft: 13776 corp: 18/504b lim: 40 exec/s: 0 rss: 69Mb L: 27/40 MS: 1 InsertByte- 00:11:32.205 [2024-04-24 10:06:45.343018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e2f2f2f cdw11:2f2f6b2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.343043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.205 [2024-04-24 10:06:45.343104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.343136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.205 [2024-04-24 10:06:45.343196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.343210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.205 #49 NEW cov: 11714 ft: 13791 corp: 19/531b lim: 40 exec/s: 49 rss: 69Mb L: 27/40 MS: 1 ChangeByte- 00:11:32.205 [2024-04-24 10:06:45.383128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2f2f2f2f cdw11:1a2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.383153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.205 [2024-04-24 10:06:45.383213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f0a2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.383226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.205 [2024-04-24 10:06:45.383283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2ff20000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.383297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.205 #53 NEW cov: 11714 ft: 13801 corp: 20/557b lim: 40 exec/s: 53 rss: 69Mb L: 26/40 MS: 4 ChangeByte-InsertByte-CMP-CrossOver- DE: "\000\000"- 00:11:32.205 [2024-04-24 10:06:45.413281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e2f2f2f cdw11:2f2f6b2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.413308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.205 [2024-04-24 10:06:45.413366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 
cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.413381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.205 [2024-04-24 10:06:45.413437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.205 [2024-04-24 10:06:45.413454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.205 #54 NEW cov: 11714 ft: 13909 corp: 21/584b lim: 40 exec/s: 54 rss: 70Mb L: 27/40 MS: 1 ChangeByte- 00:11:32.205 [2024-04-24 10:06:45.453616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2fff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.206 [2024-04-24 10:06:45.453640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.206 [2024-04-24 10:06:45.453699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.206 [2024-04-24 10:06:45.453713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.206 [2024-04-24 10:06:45.453769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.206 [2024-04-24 10:06:45.453784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.206 [2024-04-24 10:06:45.453840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:2f2f2f0a cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.206 [2024-04-24 10:06:45.453854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:32.206 [2024-04-24 10:06:45.453911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2f2fd74c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.206 [2024-04-24 10:06:45.453925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:32.206 #55 NEW cov: 11714 ft: 13919 corp: 22/624b lim: 40 exec/s: 55 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:11:32.465 [2024-04-24 10:06:45.493494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.493520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.465 [2024-04-24 10:06:45.493579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.493593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.465 [2024-04-24 10:06:45.493649] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.493664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.465 #56 NEW cov: 11714 ft: 13936 corp: 23/650b lim: 40 exec/s: 56 rss: 70Mb L: 26/40 MS: 1 CrossOver- 00:11:32.465 [2024-04-24 10:06:45.533589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f6bf0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.533613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.465 [2024-04-24 10:06:45.533670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f00 cdw11:0000022f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.533684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.465 [2024-04-24 10:06:45.533740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.533757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.465 #57 NEW cov: 11714 ft: 13996 corp: 24/681b lim: 40 exec/s: 57 rss: 70Mb L: 31/40 MS: 1 CrossOver- 00:11:32.465 [2024-04-24 10:06:45.573668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f03b2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.573692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.465 [2024-04-24 10:06:45.573752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f1a2f2f cdw11:2f2f0a2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.573766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.465 [2024-04-24 10:06:45.573825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.573839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.465 #63 NEW cov: 11714 ft: 14018 corp: 25/707b lim: 40 exec/s: 63 rss: 70Mb L: 26/40 MS: 1 CopyPart- 00:11:32.465 [2024-04-24 10:06:45.613592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:feff0000 cdw11:feff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.613616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.465 #68 NEW cov: 11714 ft: 14401 corp: 26/716b lim: 40 exec/s: 68 rss: 70Mb L: 9/40 MS: 5 CopyPart-ChangeBit-ChangeByte-PersAutoDict-CopyPart- DE: "\376\377\000\000"- 00:11:32.465 [2024-04-24 10:06:45.653829] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.465 [2024-04-24 10:06:45.653853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.465 [2024-04-24 10:06:45.653912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f0a2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.466 [2024-04-24 10:06:45.653928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.466 #69 NEW cov: 11714 ft: 14594 corp: 27/738b lim: 40 exec/s: 69 rss: 70Mb L: 22/40 MS: 1 EraseBytes- 00:11:32.466 [2024-04-24 10:06:45.694049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f6b2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.466 [2024-04-24 10:06:45.694078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.466 [2024-04-24 10:06:45.694138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.466 [2024-04-24 10:06:45.694153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.466 [2024-04-24 10:06:45.694211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.466 [2024-04-24 10:06:45.694225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.466 #70 NEW cov: 11714 ft: 14611 corp: 28/764b lim: 40 exec/s: 70 rss: 70Mb L: 26/40 MS: 1 EraseBytes- 00:11:32.466 [2024-04-24 10:06:45.734228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f03b2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.466 [2024-04-24 10:06:45.734257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.466 [2024-04-24 10:06:45.734317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f1a2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.466 [2024-04-24 10:06:45.734330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.466 [2024-04-24 10:06:45.734388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0a2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.466 [2024-04-24 10:06:45.734402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.725 #71 NEW cov: 11714 ft: 14626 corp: 29/792b lim: 40 exec/s: 71 rss: 70Mb L: 28/40 MS: 1 CrossOver- 00:11:32.725 [2024-04-24 10:06:45.774352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f6b2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.774375] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.774435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.774449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.774506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.774521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.725 #72 NEW cov: 11714 ft: 14631 corp: 30/818b lim: 40 exec/s: 72 rss: 70Mb L: 26/40 MS: 1 CrossOver- 00:11:32.725 [2024-04-24 10:06:45.814460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02ffeff cdw11:00002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.814484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.814542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.814556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.814615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.814629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.725 #73 NEW cov: 11714 ft: 14660 corp: 31/848b lim: 40 exec/s: 73 rss: 70Mb L: 30/40 MS: 1 PersAutoDict- DE: "\376\377\000\000"- 00:11:32.725 [2024-04-24 10:06:45.854667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.854692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.854751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2ffe2f2f cdw11:2f2f2fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.854764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.854825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000a2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.854838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.854896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:2f2f2f2f 
cdw11:2f2f2ff2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.854909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:32.725 #74 NEW cov: 11714 ft: 14686 corp: 32/880b lim: 40 exec/s: 74 rss: 70Mb L: 32/40 MS: 1 CopyPart- 00:11:32.725 [2024-04-24 10:06:45.894770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.894795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.894852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f0a2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.894866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.894923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:a2a2a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.894937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.894991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:a2a2a2a2 cdw11:a2a2a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.895005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:32.725 #75 NEW cov: 11714 ft: 14698 corp: 33/918b lim: 40 exec/s: 75 rss: 70Mb L: 38/40 MS: 1 InsertRepeatedBytes- 00:11:32.725 [2024-04-24 10:06:45.934747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:407e2f2f cdw11:2f2f2f6b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.934772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.934831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.934844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.934902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.934915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.725 #76 NEW cov: 11714 ft: 14723 corp: 34/946b lim: 40 exec/s: 76 rss: 70Mb L: 28/40 MS: 1 InsertByte- 00:11:32.725 [2024-04-24 10:06:45.974895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f0322f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.974919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.974979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f0a2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.974993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.725 [2024-04-24 10:06:45.975053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.725 [2024-04-24 10:06:45.975071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.725 #77 NEW cov: 11714 ft: 14728 corp: 35/972b lim: 40 exec/s: 77 rss: 70Mb L: 26/40 MS: 1 ChangeBinInt- 00:11:32.984 [2024-04-24 10:06:46.005016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2929292f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.005041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.005104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.005119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.005175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0a2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.005189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.984 #78 NEW cov: 11714 ft: 14734 corp: 36/1001b lim: 40 exec/s: 78 rss: 70Mb L: 29/40 MS: 1 InsertRepeatedBytes- 00:11:32.984 [2024-04-24 10:06:46.044822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:ffff0002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.044847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.984 #82 NEW cov: 11714 ft: 14748 corp: 37/1010b lim: 40 exec/s: 82 rss: 70Mb L: 9/40 MS: 4 ChangeBit-ChangeBit-PersAutoDict-InsertRepeatedBytes- DE: "\000\000\000\002"- 00:11:32.984 [2024-04-24 10:06:46.075220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2f2f6b2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.075245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.075304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.075318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.075375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.075389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.984 #83 NEW cov: 11714 ft: 14760 corp: 38/1036b lim: 40 exec/s: 83 rss: 70Mb L: 26/40 MS: 1 ShuffleBytes- 00:11:32.984 [2024-04-24 10:06:46.115540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2fff cdw11:fffffff5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.115566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.115627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.115642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.115701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.115715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.115772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:2f2f2f00 cdw11:0000022f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.115785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.115841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f4c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.115855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:32.984 #84 NEW cov: 11714 ft: 14771 corp: 39/1076b lim: 40 exec/s: 84 rss: 70Mb L: 40/40 MS: 1 PersAutoDict- DE: "\000\000\000\002"- 00:11:32.984 [2024-04-24 10:06:46.155599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2f2f cdw11:2929292f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.155625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.155674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.155688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.155747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0a2ffeff cdw11:00002f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.155761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:11:32.984 [2024-04-24 10:06:46.155823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.155837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:32.984 #85 NEW cov: 11714 ft: 14779 corp: 40/1109b lim: 40 exec/s: 85 rss: 70Mb L: 33/40 MS: 1 PersAutoDict- DE: "\376\377\000\000"- 00:11:32.984 [2024-04-24 10:06:46.195727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2fff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.195751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.195810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.195824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.195884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.984 [2024-04-24 10:06:46.195897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:32.984 [2024-04-24 10:06:46.195955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0a2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.985 [2024-04-24 10:06:46.195968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:32.985 [2024-04-24 10:06:46.196028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2f2f2f00 cdw11:2f2f2f0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.985 [2024-04-24 10:06:46.196041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:32.985 #86 NEW cov: 11714 ft: 14801 corp: 41/1149b lim: 40 exec/s: 86 rss: 71Mb L: 40/40 MS: 1 CrossOver- 00:11:32.985 [2024-04-24 10:06:46.235526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00002929 cdw11:29292929 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.985 [2024-04-24 10:06:46.235551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:32.985 [2024-04-24 10:06:46.235612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:29292929 cdw11:2929ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:32.985 [2024-04-24 10:06:46.235626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:33.244 #87 NEW cov: 11714 ft: 14856 corp: 42/1170b lim: 40 exec/s: 87 rss: 71Mb L: 21/40 MS: 1 InsertRepeatedBytes- 00:11:33.244 [2024-04-24 10:06:46.275946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02f2fff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:33.244 [2024-04-24 10:06:46.275972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:33.244 [2024-04-24 10:06:46.276031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff280000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:33.244 [2024-04-24 10:06:46.276045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:33.244 [2024-04-24 10:06:46.276106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:33.244 [2024-04-24 10:06:46.276120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:33.244 [2024-04-24 10:06:46.276176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:2f2f2f0a cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:33.244 [2024-04-24 10:06:46.276190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:33.244 [2024-04-24 10:06:46.276247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f4c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:33.244 [2024-04-24 10:06:46.276260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:33.244 #88 NEW cov: 11714 ft: 14857 corp: 43/1210b lim: 40 exec/s: 88 rss: 71Mb L: 40/40 MS: 1 ChangeBinInt- 00:11:33.244 [2024-04-24 10:06:46.315813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f02ffeff cdw11:00002f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:33.244 [2024-04-24 10:06:46.315839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:33.244 [2024-04-24 10:06:46.315897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1e000002 cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:33.244 [2024-04-24 10:06:46.315911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:33.244 [2024-04-24 10:06:46.315969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:2f2f2f2f cdw11:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:33.244 [2024-04-24 10:06:46.315986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:33.244 #89 NEW cov: 11714 ft: 14915 corp: 44/1240b lim: 40 exec/s: 44 rss: 71Mb L: 30/40 MS: 1 ChangeBinInt- 00:11:33.244 #89 DONE cov: 11714 ft: 14915 corp: 44/1240b lim: 40 exec/s: 44 rss: 71Mb 00:11:33.244 ###### Recommended dictionary. ###### 00:11:33.244 "\376\377\000\000" # Uses: 4 00:11:33.244 "\000\000\000\002" # Uses: 2 00:11:33.244 "\000\000" # Uses: 0 00:11:33.244 ###### End of recommended dictionary. 
###### 00:11:33.244 Done 89 runs in 2 second(s) 00:11:33.244 10:06:46 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_13.conf 00:11:33.244 10:06:46 -- ../common.sh@72 -- # (( i++ )) 00:11:33.244 10:06:46 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:33.244 10:06:46 -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:11:33.244 10:06:46 -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:11:33.244 10:06:46 -- nvmf/run.sh@24 -- # local timen=1 00:11:33.244 10:06:46 -- nvmf/run.sh@25 -- # local core=0x1 00:11:33.244 10:06:46 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:11:33.244 10:06:46 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:11:33.244 10:06:46 -- nvmf/run.sh@29 -- # printf %02d 14 00:11:33.244 10:06:46 -- nvmf/run.sh@29 -- # port=4414 00:11:33.244 10:06:46 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:11:33.244 10:06:46 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:11:33.244 10:06:46 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:33.244 10:06:46 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 -r /var/tmp/spdk14.sock 00:11:33.244 [2024-04-24 10:06:46.509307] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:33.244 [2024-04-24 10:06:46.509371] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172988 ] 00:11:33.503 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.503 [2024-04-24 10:06:46.694395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.503 [2024-04-24 10:06:46.763758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:33.503 [2024-04-24 10:06:46.763902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.792 [2024-04-24 10:06:46.823085] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.792 [2024-04-24 10:06:46.839277] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:11:33.792 INFO: Running with entropic power schedule (0xFF, 100). 00:11:33.792 INFO: Seed: 4141906576 00:11:33.792 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:33.792 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:33.792 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:11:33.792 INFO: A corpus is not provided, starting from an empty corpus 00:11:33.792 #2 INITED exec/s: 0 rss: 61Mb 00:11:33.792 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:33.792 This may also happen if the target rejected all inputs we tried so far 00:11:33.792 [2024-04-24 10:06:46.884114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:33.792 [2024-04-24 10:06:46.884148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:33.792 [2024-04-24 10:06:46.884199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:33.792 [2024-04-24 10:06:46.884217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.050 NEW_FUNC[1/664]: 0x493c30 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:11:34.050 NEW_FUNC[2/664]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:34.050 #4 NEW cov: 11481 ft: 11482 corp: 2/21b lim: 35 exec/s: 0 rss: 69Mb L: 20/20 MS: 2 ChangeByte-InsertRepeatedBytes- 00:11:34.050 [2024-04-24 10:06:47.225033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.050 [2024-04-24 10:06:47.225085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.050 [2024-04-24 10:06:47.225137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.050 [2024-04-24 10:06:47.225154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.050 [2024-04-24 10:06:47.225184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.050 [2024-04-24 10:06:47.225201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:34.050 #10 NEW cov: 11594 ft: 11990 corp: 3/45b lim: 35 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 CrossOver- 00:11:34.050 [2024-04-24 10:06:47.295097] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.050 [2024-04-24 10:06:47.295132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.050 [2024-04-24 10:06:47.295182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.050 [2024-04-24 10:06:47.295199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.050 [2024-04-24 10:06:47.295229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.050 [2024-04-24 10:06:47.295246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:34.309 #11 NEW cov: 11600 ft: 12325 corp: 4/66b lim: 35 
exec/s: 0 rss: 69Mb L: 21/24 MS: 1 InsertByte- 00:11:34.309 [2024-04-24 10:06:47.345236] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.309 [2024-04-24 10:06:47.345267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.309 [2024-04-24 10:06:47.345318] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.309 [2024-04-24 10:06:47.345336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.309 [2024-04-24 10:06:47.345367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.309 [2024-04-24 10:06:47.345384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:34.309 #12 NEW cov: 11685 ft: 12652 corp: 5/89b lim: 35 exec/s: 0 rss: 69Mb L: 23/24 MS: 1 CMP- DE: "\014\000"- 00:11:34.309 [2024-04-24 10:06:47.405359] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.309 [2024-04-24 10:06:47.405390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.309 [2024-04-24 10:06:47.405428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.309 [2024-04-24 10:06:47.405449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.309 #18 NEW cov: 11685 ft: 12809 corp: 6/103b lim: 35 exec/s: 0 rss: 69Mb L: 14/24 MS: 1 EraseBytes- 00:11:34.309 [2024-04-24 10:06:47.455567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.309 [2024-04-24 10:06:47.455600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.309 [2024-04-24 10:06:47.455649] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.309 [2024-04-24 10:06:47.455665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.309 [2024-04-24 10:06:47.455695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.309 [2024-04-24 10:06:47.455711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:34.309 #23 NEW cov: 11692 ft: 12884 corp: 7/129b lim: 35 exec/s: 0 rss: 69Mb L: 26/26 MS: 5 ChangeByte-CopyPart-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:11:34.309 [2024-04-24 10:06:47.505779] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.309 [2024-04-24 10:06:47.505811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.309 [2024-04-24 10:06:47.505846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.309 [2024-04-24 10:06:47.505862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.309 [2024-04-24 10:06:47.505892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.310 [2024-04-24 10:06:47.505907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:34.310 [2024-04-24 10:06:47.505937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.310 [2024-04-24 10:06:47.505952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:34.310 #24 NEW cov: 11692 ft: 13236 corp: 8/160b lim: 35 exec/s: 0 rss: 69Mb L: 31/31 MS: 1 CopyPart- 00:11:34.310 [2024-04-24 10:06:47.575876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.310 [2024-04-24 10:06:47.575908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.310 [2024-04-24 10:06:47.575957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.310 [2024-04-24 10:06:47.575973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.310 [2024-04-24 10:06:47.576004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.310 [2024-04-24 10:06:47.576020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:34.568 #25 NEW cov: 11692 ft: 13312 corp: 9/182b lim: 35 exec/s: 0 rss: 69Mb L: 22/31 MS: 1 InsertByte- 00:11:34.568 [2024-04-24 10:06:47.626051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.568 [2024-04-24 10:06:47.626101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.568 [2024-04-24 10:06:47.626137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.568 [2024-04-24 10:06:47.626154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.568 [2024-04-24 10:06:47.626184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.568 [2024-04-24 10:06:47.626200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:34.568 #26 NEW cov: 11692 ft: 13386 corp: 10/204b lim: 35 exec/s: 0 rss: 69Mb L: 22/31 MS: 1 ChangeBit- 00:11:34.568 [2024-04-24 
10:06:47.686178] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.568 [2024-04-24 10:06:47.686209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.568 [2024-04-24 10:06:47.686258] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.569 [2024-04-24 10:06:47.686275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.569 [2024-04-24 10:06:47.686306] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.569 [2024-04-24 10:06:47.686322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:34.569 #27 NEW cov: 11692 ft: 13515 corp: 11/227b lim: 35 exec/s: 0 rss: 70Mb L: 23/31 MS: 1 ChangeBit- 00:11:34.569 [2024-04-24 10:06:47.746381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.569 [2024-04-24 10:06:47.746418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.569 [2024-04-24 10:06:47.746457] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.569 [2024-04-24 10:06:47.746477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.569 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:34.569 #28 NEW cov: 11715 ft: 13639 corp: 12/241b lim: 35 exec/s: 0 rss: 70Mb L: 14/31 MS: 1 PersAutoDict- DE: "\014\000"- 00:11:34.569 [2024-04-24 10:06:47.816660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.569 [2024-04-24 10:06:47.816693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.569 [2024-04-24 10:06:47.816728] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.569 [2024-04-24 10:06:47.816745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.569 [2024-04-24 10:06:47.816777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.569 [2024-04-24 10:06:47.816792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:34.569 [2024-04-24 10:06:47.816823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.569 [2024-04-24 10:06:47.816843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:34.827 #29 NEW cov: 11715 ft: 13727 corp: 13/272b 
lim: 35 exec/s: 0 rss: 70Mb L: 31/31 MS: 1 CrossOver- 00:11:34.827 [2024-04-24 10:06:47.876618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:47.876649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.827 [2024-04-24 10:06:47.876699] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:47.876716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.827 #33 NEW cov: 11715 ft: 13742 corp: 14/291b lim: 35 exec/s: 33 rss: 70Mb L: 19/31 MS: 4 PersAutoDict-CrossOver-EraseBytes-CrossOver- DE: "\014\000"- 00:11:34.827 [2024-04-24 10:06:47.926763] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000053 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:47.926793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.827 [2024-04-24 10:06:47.926828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000053 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:47.926843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.827 #36 NEW cov: 11715 ft: 13759 corp: 15/309b lim: 35 exec/s: 36 rss: 70Mb L: 18/31 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:11:34.827 [2024-04-24 10:06:47.976849] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000053 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:47.976878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.827 [2024-04-24 10:06:47.976927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000053 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:47.976943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.827 #37 NEW cov: 11715 ft: 13854 corp: 16/327b lim: 35 exec/s: 37 rss: 70Mb L: 18/31 MS: 1 ChangeBit- 00:11:34.827 [2024-04-24 10:06:48.037073] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:48.037103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.827 [2024-04-24 10:06:48.037152] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:48.037170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.827 [2024-04-24 10:06:48.037200] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:48.037217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE 
ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:34.827 #38 NEW cov: 11715 ft: 13871 corp: 17/350b lim: 35 exec/s: 38 rss: 70Mb L: 23/31 MS: 1 ChangeBit- 00:11:34.827 [2024-04-24 10:06:48.097231] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:48.097261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:34.827 [2024-04-24 10:06:48.097310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:48.097334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.827 [2024-04-24 10:06:48.097364] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:34.827 [2024-04-24 10:06:48.097380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.086 #39 NEW cov: 11715 ft: 13891 corp: 18/373b lim: 35 exec/s: 39 rss: 70Mb L: 23/31 MS: 1 ChangeByte- 00:11:35.086 [2024-04-24 10:06:48.147360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.086 [2024-04-24 10:06:48.147390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.086 [2024-04-24 10:06:48.147438] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.086 [2024-04-24 10:06:48.147455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.086 [2024-04-24 10:06:48.147485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.086 [2024-04-24 10:06:48.147501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.086 #40 NEW cov: 11715 ft: 13925 corp: 19/400b lim: 35 exec/s: 40 rss: 70Mb L: 27/31 MS: 1 InsertRepeatedBytes- 00:11:35.086 [2024-04-24 10:06:48.207493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.086 [2024-04-24 10:06:48.207523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.086 [2024-04-24 10:06:48.207572] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000fd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.086 [2024-04-24 10:06:48.207590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.086 #41 NEW cov: 11715 ft: 13960 corp: 20/414b lim: 35 exec/s: 41 rss: 70Mb L: 14/31 MS: 1 ChangeBinInt- 00:11:35.086 [2024-04-24 10:06:48.267842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:11:35.086 [2024-04-24 10:06:48.267872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.086 [2024-04-24 10:06:48.267906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.086 [2024-04-24 10:06:48.267922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.086 [2024-04-24 10:06:48.267952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.086 [2024-04-24 10:06:48.267968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.086 [2024-04-24 10:06:48.267998] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.086 [2024-04-24 10:06:48.268013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:35.086 #42 NEW cov: 11715 ft: 14053 corp: 21/445b lim: 35 exec/s: 42 rss: 70Mb L: 31/31 MS: 1 ChangeBit- 00:11:35.086 [2024-04-24 10:06:48.327783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.086 [2024-04-24 10:06:48.327817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.345 #46 NEW cov: 11715 ft: 14770 corp: 22/452b lim: 35 exec/s: 46 rss: 70Mb L: 7/31 MS: 4 CrossOver-CopyPart-PersAutoDict-InsertRepeatedBytes- DE: "\014\000"- 00:11:35.345 [2024-04-24 10:06:48.388077] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.388106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.345 [2024-04-24 10:06:48.388155] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.388172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.345 [2024-04-24 10:06:48.388202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.388219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.345 #47 NEW cov: 11715 ft: 14812 corp: 23/479b lim: 35 exec/s: 47 rss: 71Mb L: 27/31 MS: 1 ChangeBit- 00:11:35.345 [2024-04-24 10:06:48.448141] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.448170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.345 [2024-04-24 10:06:48.448220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 
[2024-04-24 10:06:48.448237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.345 #48 NEW cov: 11715 ft: 14818 corp: 24/493b lim: 35 exec/s: 48 rss: 71Mb L: 14/31 MS: 1 ChangeByte- 00:11:35.345 [2024-04-24 10:06:48.498220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.498249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.345 [2024-04-24 10:06:48.498298] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES AUTONOMOUS POWER STATE TRANSITION cid:5 cdw10:0000000c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.498315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.345 #49 NEW cov: 11715 ft: 14827 corp: 25/509b lim: 35 exec/s: 49 rss: 71Mb L: 16/31 MS: 1 PersAutoDict- DE: "\014\000"- 00:11:35.345 [2024-04-24 10:06:48.548430] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.548475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.345 [2024-04-24 10:06:48.548509] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.548526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.345 [2024-04-24 10:06:48.548556] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.548573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.345 #50 NEW cov: 11715 ft: 14851 corp: 26/533b lim: 35 exec/s: 50 rss: 71Mb L: 24/31 MS: 1 CrossOver- 00:11:35.345 [2024-04-24 10:06:48.608647] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.608680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.345 [2024-04-24 10:06:48.608729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.608746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.345 [2024-04-24 10:06:48.608776] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.345 [2024-04-24 10:06:48.608792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.604 #51 NEW cov: 11715 ft: 14911 corp: 27/556b lim: 35 exec/s: 51 rss: 71Mb L: 23/31 MS: 1 ShuffleBytes- 00:11:35.604 [2024-04-24 10:06:48.658786] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.604 [2024-04-24 10:06:48.658817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.604 [2024-04-24 10:06:48.658851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.604 [2024-04-24 10:06:48.658869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.604 [2024-04-24 10:06:48.658900] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.604 [2024-04-24 10:06:48.658917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.604 #52 NEW cov: 11715 ft: 14920 corp: 28/577b lim: 35 exec/s: 52 rss: 71Mb L: 21/31 MS: 1 CrossOver- 00:11:35.604 [2024-04-24 10:06:48.708866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.604 [2024-04-24 10:06:48.708895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.604 [2024-04-24 10:06:48.708945] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.604 [2024-04-24 10:06:48.708962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.605 [2024-04-24 10:06:48.708992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.605 [2024-04-24 10:06:48.709008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.605 #53 NEW cov: 11715 ft: 14939 corp: 29/600b lim: 35 exec/s: 53 rss: 71Mb L: 23/31 MS: 1 ChangeByte- 00:11:35.605 [2024-04-24 10:06:48.769087] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.605 [2024-04-24 10:06:48.769118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.605 [2024-04-24 10:06:48.769152] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.605 [2024-04-24 10:06:48.769169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.605 [2024-04-24 10:06:48.769199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.605 [2024-04-24 10:06:48.769215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.605 #54 NEW cov: 11715 ft: 14967 corp: 30/625b lim: 35 exec/s: 54 rss: 71Mb L: 25/31 MS: 1 CopyPart- 00:11:35.605 [2024-04-24 10:06:48.819212] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.605 [2024-04-24 10:06:48.819244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.605 [2024-04-24 10:06:48.819281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.605 [2024-04-24 10:06:48.819300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.605 [2024-04-24 10:06:48.819333] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.605 [2024-04-24 10:06:48.819352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.605 #55 NEW cov: 11715 ft: 14978 corp: 31/648b lim: 35 exec/s: 55 rss: 71Mb L: 23/31 MS: 1 ChangeByte- 00:11:35.605 [2024-04-24 10:06:48.879425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.605 [2024-04-24 10:06:48.879458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.605 [2024-04-24 10:06:48.879494] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.605 [2024-04-24 10:06:48.879511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.605 [2024-04-24 10:06:48.879542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.605 [2024-04-24 10:06:48.879559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.864 #56 NEW cov: 11715 ft: 14987 corp: 32/671b lim: 35 exec/s: 28 rss: 71Mb L: 23/31 MS: 1 ShuffleBytes- 00:11:35.864 #56 DONE cov: 11715 ft: 14987 corp: 32/671b lim: 35 exec/s: 28 rss: 71Mb 00:11:35.864 ###### Recommended dictionary. ###### 00:11:35.864 "\014\000" # Uses: 4 00:11:35.864 ###### End of recommended dictionary. 
###### 00:11:35.864 Done 56 runs in 2 second(s) 00:11:35.864 10:06:49 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_14.conf 00:11:35.864 10:06:49 -- ../common.sh@72 -- # (( i++ )) 00:11:35.864 10:06:49 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:35.864 10:06:49 -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:11:35.864 10:06:49 -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:11:35.864 10:06:49 -- nvmf/run.sh@24 -- # local timen=1 00:11:35.865 10:06:49 -- nvmf/run.sh@25 -- # local core=0x1 00:11:35.865 10:06:49 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:11:35.865 10:06:49 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:11:35.865 10:06:49 -- nvmf/run.sh@29 -- # printf %02d 15 00:11:35.865 10:06:49 -- nvmf/run.sh@29 -- # port=4415 00:11:35.865 10:06:49 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:11:35.865 10:06:49 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:11:35.865 10:06:49 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:35.865 10:06:49 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 -r /var/tmp/spdk15.sock 00:11:35.865 [2024-04-24 10:06:49.082161] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:35.865 [2024-04-24 10:06:49.082239] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173325 ] 00:11:35.865 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.123 [2024-04-24 10:06:49.374893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.382 [2024-04-24 10:06:49.467790] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:36.382 [2024-04-24 10:06:49.467921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.382 [2024-04-24 10:06:49.526372] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.383 [2024-04-24 10:06:49.542563] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:11:36.383 INFO: Running with entropic power schedule (0xFF, 100). 00:11:36.383 INFO: Seed: 2548957076 00:11:36.383 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:36.383 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:36.383 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:11:36.383 INFO: A corpus is not provided, starting from an empty corpus 00:11:36.383 #2 INITED exec/s: 0 rss: 61Mb 00:11:36.383 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:36.383 This may also happen if the target rejected all inputs we tried so far 00:11:36.383 [2024-04-24 10:06:49.587370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.383 [2024-04-24 10:06:49.587405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.383 [2024-04-24 10:06:49.587456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.383 [2024-04-24 10:06:49.587473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.641 NEW_FUNC[1/663]: 0x495170 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:11:36.641 NEW_FUNC[2/663]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:36.641 #3 NEW cov: 11469 ft: 11465 corp: 2/19b lim: 35 exec/s: 0 rss: 68Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:11:36.901 [2024-04-24 10:06:49.928189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:49.928237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.902 [2024-04-24 10:06:49.928288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:49.928305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.902 #9 NEW cov: 11582 ft: 11887 corp: 3/37b lim: 35 exec/s: 0 rss: 68Mb L: 18/18 MS: 1 ChangeByte- 00:11:36.902 [2024-04-24 10:06:49.988315] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:49.988348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.902 [2024-04-24 10:06:49.988399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:49.988416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.902 [2024-04-24 10:06:49.988447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:49.988468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.902 [2024-04-24 10:06:49.988498] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:49.988514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.902 #10 NEW cov: 11588 ft: 12624 corp: 4/67b lim: 35 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:11:36.902 
[2024-04-24 10:06:50.058660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:50.058706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.902 [2024-04-24 10:06:50.058743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:50.058760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.902 [2024-04-24 10:06:50.058792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000073d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:50.058808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.902 [2024-04-24 10:06:50.058839] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:50.058855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.902 [2024-04-24 10:06:50.058886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:50.058902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:36.902 #11 NEW cov: 11673 ft: 12989 corp: 5/102b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 CopyPart- 00:11:36.902 [2024-04-24 10:06:50.128619] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000062 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:36.902 [2024-04-24 10:06:50.128665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.902 #13 NEW cov: 11673 ft: 13526 corp: 6/109b lim: 35 exec/s: 0 rss: 68Mb L: 7/35 MS: 2 ChangeByte-InsertRepeatedBytes- 00:11:37.161 [2024-04-24 10:06:50.188722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000062 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.161 [2024-04-24 10:06:50.188758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.161 #14 NEW cov: 11673 ft: 13586 corp: 7/117b lim: 35 exec/s: 0 rss: 68Mb L: 8/35 MS: 1 InsertByte- 00:11:37.161 [2024-04-24 10:06:50.258940] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000062 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.161 [2024-04-24 10:06:50.258974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.161 NEW_FUNC[1/1]: 0x4b25d0 in feat_volatile_write_cache /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:312 00:11:37.161 #15 NEW cov: 11687 ft: 13666 corp: 8/131b lim: 35 exec/s: 0 rss: 68Mb L: 14/35 MS: 1 CrossOver- 00:11:37.161 [2024-04-24 10:06:50.319039] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007fe SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:11:37.161 [2024-04-24 10:06:50.319080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.161 #20 NEW cov: 11687 ft: 13772 corp: 9/141b lim: 35 exec/s: 0 rss: 68Mb L: 10/35 MS: 5 EraseBytes-ChangeBit-CMP-InsertByte-CMP- DE: "\376\377\377\377"-"R\000\000\000"- 00:11:37.161 [2024-04-24 10:06:50.369214] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007fe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.161 [2024-04-24 10:06:50.369246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.161 #21 NEW cov: 11687 ft: 13800 corp: 10/151b lim: 35 exec/s: 0 rss: 68Mb L: 10/35 MS: 1 ChangeByte- 00:11:37.420 [2024-04-24 10:06:50.439450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.439482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.420 [2024-04-24 10:06:50.439519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.439536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.420 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:37.420 #22 NEW cov: 11710 ft: 13863 corp: 11/169b lim: 35 exec/s: 0 rss: 68Mb L: 18/35 MS: 1 ChangeBit- 00:11:37.420 [2024-04-24 10:06:50.489500] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000798 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.489531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.420 #23 NEW cov: 11710 ft: 13916 corp: 12/177b lim: 35 exec/s: 0 rss: 69Mb L: 8/35 MS: 1 ChangeBinInt- 00:11:37.420 [2024-04-24 10:06:50.549869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.549900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.420 [2024-04-24 10:06:50.549951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.549968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.420 [2024-04-24 10:06:50.550000] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.550016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:37.420 [2024-04-24 10:06:50.550047] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.550070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:37.420 #24 NEW cov: 11710 ft: 13969 corp: 13/210b lim: 35 exec/s: 24 rss: 69Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:11:37.420 [2024-04-24 10:06:50.599908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000062 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.599938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.420 [2024-04-24 10:06:50.599990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.600006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.420 [2024-04-24 10:06:50.600038] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.600058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:37.420 #25 NEW cov: 11710 ft: 14169 corp: 14/232b lim: 35 exec/s: 25 rss: 69Mb L: 22/35 MS: 1 InsertRepeatedBytes- 00:11:37.420 [2024-04-24 10:06:50.660106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.660136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.420 [2024-04-24 10:06:50.660171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.660187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.420 [2024-04-24 10:06:50.660218] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.660233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:37.420 [2024-04-24 10:06:50.660264] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.420 [2024-04-24 10:06:50.660279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:37.680 #26 NEW cov: 11710 ft: 14187 corp: 15/265b lim: 35 exec/s: 26 rss: 69Mb L: 33/35 MS: 1 ChangeByte- 00:11:37.680 [2024-04-24 10:06:50.730315] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.730347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.680 [2024-04-24 10:06:50.730383] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.730399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.680 [2024-04-24 
10:06:50.730431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.730446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:37.680 [2024-04-24 10:06:50.730477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.730493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:37.680 #27 NEW cov: 11710 ft: 14205 corp: 16/298b lim: 35 exec/s: 27 rss: 69Mb L: 33/35 MS: 1 ChangeBinInt- 00:11:37.680 [2024-04-24 10:06:50.780206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.780235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.680 #32 NEW cov: 11710 ft: 14265 corp: 17/311b lim: 35 exec/s: 32 rss: 69Mb L: 13/35 MS: 5 EraseBytes-ChangeByte-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:11:37.680 [2024-04-24 10:06:50.830601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.830635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.680 [2024-04-24 10:06:50.830670] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.830686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.680 [2024-04-24 10:06:50.830722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000073d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.830737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:37.680 [2024-04-24 10:06:50.830768] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.830784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:37.680 [2024-04-24 10:06:50.830816] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.830832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:37.680 #33 NEW cov: 11710 ft: 14342 corp: 18/346b lim: 35 exec/s: 33 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:11:37.680 [2024-04-24 10:06:50.890554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.890584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.680 [2024-04-24 10:06:50.890617] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.890631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.680 #34 NEW cov: 11710 ft: 14381 corp: 19/364b lim: 35 exec/s: 34 rss: 69Mb L: 18/35 MS: 1 ChangeBit- 00:11:37.680 [2024-04-24 10:06:50.950780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.950810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.680 [2024-04-24 10:06:50.950859] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.950875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.680 [2024-04-24 10:06:50.950905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.680 [2024-04-24 10:06:50.950921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:37.939 #35 NEW cov: 11710 ft: 14412 corp: 20/388b lim: 35 exec/s: 35 rss: 69Mb L: 24/35 MS: 1 CrossOver- 00:11:37.939 [2024-04-24 10:06:51.000790] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007fe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.939 [2024-04-24 10:06:51.000818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.939 #36 NEW cov: 11710 ft: 14426 corp: 21/398b lim: 35 exec/s: 36 rss: 69Mb L: 10/35 MS: 1 PersAutoDict- DE: "R\000\000\000"- 00:11:37.939 [2024-04-24 10:06:51.060947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000798 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.939 [2024-04-24 10:06:51.060976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.939 #37 NEW cov: 11710 ft: 14499 corp: 22/406b lim: 35 exec/s: 37 rss: 69Mb L: 8/35 MS: 1 ChangeBit- 00:11:37.939 [2024-04-24 10:06:51.121181] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000062 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.939 [2024-04-24 10:06:51.121210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.939 #38 NEW cov: 11710 ft: 14566 corp: 23/421b lim: 35 exec/s: 38 rss: 69Mb L: 15/35 MS: 1 InsertByte- 00:11:37.939 [2024-04-24 10:06:51.181285] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000798 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:37.939 [2024-04-24 10:06:51.181314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.939 #39 NEW cov: 11710 ft: 14606 corp: 24/433b lim: 35 exec/s: 39 rss: 69Mb L: 12/35 MS: 1 CMP- DE: "\377\377\377\377"- 00:11:38.198 [2024-04-24 10:06:51.231591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED 
cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.231621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.231670] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.231686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.231716] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.231731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.231762] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.231777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.199 #40 NEW cov: 11710 ft: 14637 corp: 25/463b lim: 35 exec/s: 40 rss: 69Mb L: 30/35 MS: 1 ShuffleBytes- 00:11:38.199 [2024-04-24 10:06:51.281549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007fe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.281578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.199 #41 NEW cov: 11710 ft: 14644 corp: 26/474b lim: 35 exec/s: 41 rss: 69Mb L: 11/35 MS: 1 InsertByte- 00:11:38.199 [2024-04-24 10:06:51.331874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000062 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.331902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.331952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.331968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.331999] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000002c9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.332014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.332045] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000252 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.332066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.199 #42 NEW cov: 11710 ft: 14658 corp: 27/505b lim: 35 exec/s: 42 rss: 69Mb L: 31/35 MS: 1 InsertRepeatedBytes- 00:11:38.199 [2024-04-24 10:06:51.391984] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 
10:06:51.392012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.392072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.392089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.392120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000073d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.392135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.392165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.392180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.199 #43 NEW cov: 11710 ft: 14692 corp: 28/535b lim: 35 exec/s: 43 rss: 69Mb L: 30/35 MS: 1 EraseBytes- 00:11:38.199 [2024-04-24 10:06:51.442126] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.442156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.442206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.442222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.442252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.442267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.199 [2024-04-24 10:06:51.442297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.199 [2024-04-24 10:06:51.442313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.458 #44 NEW cov: 11710 ft: 14747 corp: 29/565b lim: 35 exec/s: 44 rss: 69Mb L: 30/35 MS: 1 ChangeBinInt- 00:11:38.458 [2024-04-24 10:06:51.492299] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.458 [2024-04-24 10:06:51.492332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.458 [2024-04-24 10:06:51.492367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.458 [2024-04-24 10:06:51.492383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.458 [2024-04-24 10:06:51.492413] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.458 [2024-04-24 10:06:51.492429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.458 [2024-04-24 10:06:51.492459] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.458 [2024-04-24 10:06:51.492474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.458 #45 NEW cov: 11710 ft: 14768 corp: 30/595b lim: 35 exec/s: 45 rss: 69Mb L: 30/35 MS: 1 ShuffleBytes- 00:11:38.458 [2024-04-24 10:06:51.552492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000063d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.458 [2024-04-24 10:06:51.552538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.458 [2024-04-24 10:06:51.552577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.458 [2024-04-24 10:06:51.552592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.458 [2024-04-24 10:06:51.552623] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.458 [2024-04-24 10:06:51.552639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.458 [2024-04-24 10:06:51.552669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:38.458 [2024-04-24 10:06:51.552684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.459 #46 NEW cov: 11710 ft: 14790 corp: 31/625b lim: 35 exec/s: 23 rss: 69Mb L: 30/35 MS: 1 ChangeBinInt- 00:11:38.459 #46 DONE cov: 11710 ft: 14790 corp: 31/625b lim: 35 exec/s: 23 rss: 69Mb 00:11:38.459 ###### Recommended dictionary. ###### 00:11:38.459 "\376\377\377\377" # Uses: 0 00:11:38.459 "R\000\000\000" # Uses: 1 00:11:38.459 "\377\377\377\377" # Uses: 0 00:11:38.459 ###### End of recommended dictionary. 
###### 00:11:38.459 Done 46 runs in 2 second(s) 00:11:38.459 10:06:51 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_15.conf 00:11:38.459 10:06:51 -- ../common.sh@72 -- # (( i++ )) 00:11:38.459 10:06:51 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:38.459 10:06:51 -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:11:38.459 10:06:51 -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:11:38.459 10:06:51 -- nvmf/run.sh@24 -- # local timen=1 00:11:38.459 10:06:51 -- nvmf/run.sh@25 -- # local core=0x1 00:11:38.459 10:06:51 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:11:38.459 10:06:51 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:11:38.459 10:06:51 -- nvmf/run.sh@29 -- # printf %02d 16 00:11:38.459 10:06:51 -- nvmf/run.sh@29 -- # port=4416 00:11:38.459 10:06:51 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:11:38.718 10:06:51 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:11:38.718 10:06:51 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:38.718 10:06:51 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 -r /var/tmp/spdk16.sock 00:11:38.718 [2024-04-24 10:06:51.770438] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:38.718 [2024-04-24 10:06:51.770509] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173722 ] 00:11:38.718 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.977 [2024-04-24 10:06:52.067121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.977 [2024-04-24 10:06:52.159879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:38.977 [2024-04-24 10:06:52.160009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.977 [2024-04-24 10:06:52.218447] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.977 [2024-04-24 10:06:52.234654] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:11:38.977 INFO: Running with entropic power schedule (0xFF, 100). 00:11:38.977 INFO: Seed: 943993602 00:11:39.236 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:39.237 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:39.237 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:11:39.237 INFO: A corpus is not provided, starting from an empty corpus 00:11:39.237 #2 INITED exec/s: 0 rss: 61Mb 00:11:39.237 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:39.237 This may also happen if the target rejected all inputs we tried so far 00:11:39.237 [2024-04-24 10:06:52.282621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.237 [2024-04-24 10:06:52.282655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.237 [2024-04-24 10:06:52.282710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.237 [2024-04-24 10:06:52.282725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.237 [2024-04-24 10:06:52.282782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.237 [2024-04-24 10:06:52.282798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.495 NEW_FUNC[1/664]: 0x496620 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:11:39.495 NEW_FUNC[2/664]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:39.495 #3 NEW cov: 11572 ft: 11573 corp: 2/72b lim: 105 exec/s: 0 rss: 68Mb L: 71/71 MS: 1 InsertRepeatedBytes- 00:11:39.495 [2024-04-24 10:06:52.613373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.613416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.495 [2024-04-24 10:06:52.613470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.613485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.495 [2024-04-24 10:06:52.613538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.613552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.495 #4 NEW cov: 11685 ft: 12083 corp: 3/143b lim: 105 exec/s: 0 rss: 68Mb L: 71/71 MS: 1 ShuffleBytes- 00:11:39.495 [2024-04-24 10:06:52.663396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.663428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.495 [2024-04-24 10:06:52.663463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.663478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.495 [2024-04-24 10:06:52.663531] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.663546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.495 #5 NEW cov: 11691 ft: 12323 corp: 4/215b lim: 105 exec/s: 0 rss: 68Mb L: 72/72 MS: 1 InsertByte- 00:11:39.495 [2024-04-24 10:06:52.703500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.703530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.495 [2024-04-24 10:06:52.703568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.703584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.495 [2024-04-24 10:06:52.703637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.703651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.495 #6 NEW cov: 11776 ft: 12575 corp: 5/287b lim: 105 exec/s: 0 rss: 68Mb L: 72/72 MS: 1 ShuffleBytes- 00:11:39.495 [2024-04-24 10:06:52.743661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.743688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.495 [2024-04-24 10:06:52.743726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.743741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.495 [2024-04-24 10:06:52.743796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.495 [2024-04-24 10:06:52.743811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.495 #7 NEW cov: 11776 ft: 12706 corp: 6/358b lim: 105 exec/s: 0 rss: 68Mb L: 71/72 MS: 1 ShuffleBytes- 00:11:39.754 [2024-04-24 10:06:52.783776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.754 [2024-04-24 10:06:52.783805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.783859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.783876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.783932] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.783947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.755 #8 NEW cov: 11776 ft: 12811 corp: 7/430b lim: 105 exec/s: 0 rss: 68Mb L: 72/72 MS: 1 CrossOver- 00:11:39.755 [2024-04-24 10:06:52.823964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.823993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.824032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.824048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.824111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:69805794224242688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.824128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.755 #9 NEW cov: 11776 ft: 12910 corp: 8/502b lim: 105 exec/s: 0 rss: 68Mb L: 72/72 MS: 1 ChangeBinInt- 00:11:39.755 [2024-04-24 10:06:52.863996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.864025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.864070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4278190080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.864086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.864140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.864156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.755 #10 NEW cov: 11776 ft: 12967 corp: 9/580b lim: 105 exec/s: 0 rss: 69Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:11:39.755 [2024-04-24 10:06:52.904110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.904138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.904195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.904210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.904262] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:64513 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.904279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.755 #11 NEW cov: 11776 ft: 12987 corp: 10/651b lim: 105 exec/s: 0 rss: 69Mb L: 71/78 MS: 1 ChangeBinInt- 00:11:39.755 [2024-04-24 10:06:52.944203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.944232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.944273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.944288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.944342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.944358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.755 #12 NEW cov: 11776 ft: 13051 corp: 11/722b lim: 105 exec/s: 0 rss: 69Mb L: 71/78 MS: 1 CopyPart- 00:11:39.755 [2024-04-24 10:06:52.984344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.984372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.984408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4278190080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.984425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:52.984480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:52.984496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.755 #13 NEW cov: 11776 ft: 13065 corp: 12/800b lim: 105 exec/s: 0 rss: 69Mb L: 78/78 MS: 1 ChangeBinInt- 00:11:39.755 [2024-04-24 10:06:53.024535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316418184970240 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:53.024563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:53.024606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1593835556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:53.024623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 
10:06:53.024677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:53.024693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:39.755 [2024-04-24 10:06:53.024747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:63488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.755 [2024-04-24 10:06:53.024761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:40.014 #14 NEW cov: 11776 ft: 13571 corp: 13/890b lim: 105 exec/s: 0 rss: 69Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:11:40.014 [2024-04-24 10:06:53.074538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.014 [2024-04-24 10:06:53.074564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.014 [2024-04-24 10:06:53.074601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:33554432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.014 [2024-04-24 10:06:53.074616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.014 [2024-04-24 10:06:53.074670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.014 [2024-04-24 10:06:53.074686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.014 #15 NEW cov: 11776 ft: 13679 corp: 14/962b lim: 105 exec/s: 0 rss: 69Mb L: 72/90 MS: 1 ChangeBinInt- 00:11:40.014 [2024-04-24 10:06:53.114586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.014 [2024-04-24 10:06:53.114613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.014 [2024-04-24 10:06:53.114650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.014 [2024-04-24 10:06:53.114665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.014 #19 NEW cov: 11776 ft: 14042 corp: 15/1008b lim: 105 exec/s: 0 rss: 69Mb L: 46/90 MS: 4 ChangeByte-InsertByte-InsertByte-CrossOver- 00:11:40.014 [2024-04-24 10:06:53.154796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.154822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.015 [2024-04-24 10:06:53.154877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:553648128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.154894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:11:40.015 [2024-04-24 10:06:53.154945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.154959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.015 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:40.015 #20 NEW cov: 11799 ft: 14085 corp: 16/1081b lim: 105 exec/s: 0 rss: 69Mb L: 73/90 MS: 1 InsertByte- 00:11:40.015 [2024-04-24 10:06:53.194875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.194902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.015 [2024-04-24 10:06:53.194960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.194977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.015 [2024-04-24 10:06:53.195032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.195046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.015 #21 NEW cov: 11799 ft: 14140 corp: 17/1153b lim: 105 exec/s: 0 rss: 69Mb L: 72/90 MS: 1 ChangeByte- 00:11:40.015 [2024-04-24 10:06:53.235049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.235079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.015 [2024-04-24 10:06:53.235114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.235128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.015 [2024-04-24 10:06:53.235183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.235196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.015 #22 NEW cov: 11799 ft: 14191 corp: 18/1224b lim: 105 exec/s: 0 rss: 69Mb L: 71/90 MS: 1 ShuffleBytes- 00:11:40.015 [2024-04-24 10:06:53.275101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.275128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.015 [2024-04-24 10:06:53.275175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.275191] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.015 [2024-04-24 10:06:53.275242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.015 [2024-04-24 10:06:53.275256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.274 #23 NEW cov: 11799 ft: 14276 corp: 19/1295b lim: 105 exec/s: 23 rss: 69Mb L: 71/90 MS: 1 ChangeBit- 00:11:40.274 [2024-04-24 10:06:53.315241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.274 [2024-04-24 10:06:53.315268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.274 [2024-04-24 10:06:53.315309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.274 [2024-04-24 10:06:53.315324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.274 [2024-04-24 10:06:53.315379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.274 [2024-04-24 10:06:53.315394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.274 #24 NEW cov: 11799 ft: 14305 corp: 20/1367b lim: 105 exec/s: 24 rss: 69Mb L: 72/90 MS: 1 InsertByte- 00:11:40.274 [2024-04-24 10:06:53.355261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.274 [2024-04-24 10:06:53.355289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.274 [2024-04-24 10:06:53.355356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.274 [2024-04-24 10:06:53.355371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.274 #25 NEW cov: 11799 ft: 14323 corp: 21/1415b lim: 105 exec/s: 25 rss: 70Mb L: 48/90 MS: 1 CMP- DE: "\015\000"- 00:11:40.274 [2024-04-24 10:06:53.395453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.274 [2024-04-24 10:06:53.395480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.274 [2024-04-24 10:06:53.395515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.274 [2024-04-24 10:06:53.395531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.274 [2024-04-24 10:06:53.395584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.274 [2024-04-24 10:06:53.395600] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.274 #26 NEW cov: 11799 ft: 14365 corp: 22/1486b lim: 105 exec/s: 26 rss: 70Mb L: 71/90 MS: 1 ShuffleBytes- 00:11:40.274 [2024-04-24 10:06:53.435686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316418184970240 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.274 [2024-04-24 10:06:53.435713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.274 [2024-04-24 10:06:53.435762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1593835556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.275 [2024-04-24 10:06:53.435777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.275 [2024-04-24 10:06:53.435827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:23040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.275 [2024-04-24 10:06:53.435841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.275 [2024-04-24 10:06:53.435892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:63488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.275 [2024-04-24 10:06:53.435910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:40.275 #27 NEW cov: 11799 ft: 14391 corp: 23/1576b lim: 105 exec/s: 27 rss: 70Mb L: 90/90 MS: 1 ChangeBinInt- 00:11:40.275 [2024-04-24 10:06:53.475662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.275 [2024-04-24 10:06:53.475688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.275 [2024-04-24 10:06:53.475736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.275 [2024-04-24 10:06:53.475751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.275 [2024-04-24 10:06:53.475803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.275 [2024-04-24 10:06:53.475818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.275 #28 NEW cov: 11799 ft: 14410 corp: 24/1654b lim: 105 exec/s: 28 rss: 70Mb L: 78/90 MS: 1 CrossOver- 00:11:40.275 [2024-04-24 10:06:53.515803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.275 [2024-04-24 10:06:53.515829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.275 [2024-04-24 10:06:53.515867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.275 
[2024-04-24 10:06:53.515882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.275 [2024-04-24 10:06:53.515933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.275 [2024-04-24 10:06:53.515947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.275 #29 NEW cov: 11799 ft: 14439 corp: 25/1728b lim: 105 exec/s: 29 rss: 70Mb L: 74/90 MS: 1 PersAutoDict- DE: "\015\000"- 00:11:40.534 [2024-04-24 10:06:53.555824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.534 [2024-04-24 10:06:53.555851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.534 [2024-04-24 10:06:53.555899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.534 [2024-04-24 10:06:53.555916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.534 #30 NEW cov: 11799 ft: 14557 corp: 26/1774b lim: 105 exec/s: 30 rss: 70Mb L: 46/90 MS: 1 ChangeASCIIInt- 00:11:40.534 [2024-04-24 10:06:53.596040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.534 [2024-04-24 10:06:53.596070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.534 [2024-04-24 10:06:53.596117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.534 [2024-04-24 10:06:53.596133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.534 [2024-04-24 10:06:53.596188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.534 [2024-04-24 10:06:53.596207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.534 #31 NEW cov: 11799 ft: 14572 corp: 27/1846b lim: 105 exec/s: 31 rss: 70Mb L: 72/90 MS: 1 CopyPart- 00:11:40.534 [2024-04-24 10:06:53.626168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.534 [2024-04-24 10:06:53.626194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.534 [2024-04-24 10:06:53.626235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.534 [2024-04-24 10:06:53.626248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.626302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 
[2024-04-24 10:06:53.626317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.535 #32 NEW cov: 11799 ft: 14613 corp: 28/1917b lim: 105 exec/s: 32 rss: 70Mb L: 71/90 MS: 1 ChangeBinInt- 00:11:40.535 [2024-04-24 10:06:53.666397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.666424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.666468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.666481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.666531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.666548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.666599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.666614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:40.535 #33 NEW cov: 11799 ft: 14627 corp: 29/2005b lim: 105 exec/s: 33 rss: 70Mb L: 88/90 MS: 1 InsertRepeatedBytes- 00:11:40.535 [2024-04-24 10:06:53.706447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.706474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.706523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.706538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.706590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:42663 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.706605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.706660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2795939494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.706678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:40.535 #34 NEW cov: 11799 ft: 14648 corp: 30/2109b lim: 105 exec/s: 34 rss: 70Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:11:40.535 [2024-04-24 10:06:53.746588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.746614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.746665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.746681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.746733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.746748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.746800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744069414584320 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.746814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:40.535 #35 NEW cov: 11799 ft: 14649 corp: 31/2212b lim: 105 exec/s: 35 rss: 70Mb L: 103/104 MS: 1 InsertRepeatedBytes- 00:11:40.535 [2024-04-24 10:06:53.786687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.786713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.786762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.786778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.786830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.786843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.535 [2024-04-24 10:06:53.786897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.535 [2024-04-24 10:06:53.786913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:40.795 #36 NEW cov: 11799 ft: 14680 corp: 32/2307b lim: 105 exec/s: 36 rss: 70Mb L: 95/104 MS: 1 CopyPart- 00:11:40.795 [2024-04-24 10:06:53.826981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.827008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.827065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.827080] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.827134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.827148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.827202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.827217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.827286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:206158430208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.827300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:40.795 #37 NEW cov: 11799 ft: 14741 corp: 33/2412b lim: 105 exec/s: 37 rss: 70Mb L: 105/105 MS: 1 CrossOver- 00:11:40.795 [2024-04-24 10:06:53.866848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.866875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.866921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.866937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.866991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2359296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.867006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.795 #38 NEW cov: 11799 ft: 14746 corp: 34/2483b lim: 105 exec/s: 38 rss: 70Mb L: 71/105 MS: 1 CrossOver- 00:11:40.795 [2024-04-24 10:06:53.907158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.907188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.907252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.907268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.907322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.907337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:11:40.795 [2024-04-24 10:06:53.907390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.907406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.907459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:206158430208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.907472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:40.795 #39 NEW cov: 11799 ft: 14801 corp: 35/2588b lim: 105 exec/s: 39 rss: 71Mb L: 105/105 MS: 1 ChangeASCIIInt- 00:11:40.795 [2024-04-24 10:06:53.947172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.947199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.947247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.947264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.947317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.947331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.947387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.947402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:40.795 #40 NEW cov: 11799 ft: 14812 corp: 36/2673b lim: 105 exec/s: 40 rss: 71Mb L: 85/105 MS: 1 CopyPart- 00:11:40.795 [2024-04-24 10:06:53.987129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:56002347008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.987156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.987220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.987236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:53.987290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:53.987305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:40.795 #41 NEW cov: 11799 ft: 14816 corp: 37/2747b lim: 105 exec/s: 41 rss: 71Mb L: 74/105 MS: 1 CMP- DE: 
"\015\000\000\000"- 00:11:40.795 [2024-04-24 10:06:54.027025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:54.027053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.795 #42 NEW cov: 11799 ft: 15297 corp: 38/2783b lim: 105 exec/s: 42 rss: 71Mb L: 36/105 MS: 1 EraseBytes- 00:11:40.795 [2024-04-24 10:06:54.067377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:54.067406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:54.067445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:54.067461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:40.795 [2024-04-24 10:06:54.067517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:64513 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:40.795 [2024-04-24 10:06:54.067531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:41.055 #43 NEW cov: 11799 ft: 15310 corp: 39/2854b lim: 105 exec/s: 43 rss: 71Mb L: 71/105 MS: 1 ShuffleBytes- 00:11:41.055 [2024-04-24 10:06:54.107483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.055 [2024-04-24 10:06:54.107511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:41.055 [2024-04-24 10:06:54.107548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:553648128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.055 [2024-04-24 10:06:54.107566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:41.055 [2024-04-24 10:06:54.107622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.055 [2024-04-24 10:06:54.107636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:41.055 #44 NEW cov: 11799 ft: 15325 corp: 40/2927b lim: 105 exec/s: 44 rss: 71Mb L: 73/105 MS: 1 ChangeBinInt- 00:11:41.055 [2024-04-24 10:06:54.147699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.055 [2024-04-24 10:06:54.147726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:41.055 [2024-04-24 10:06:54.147772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.055 [2024-04-24 10:06:54.147788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:41.055 [2024-04-24 10:06:54.147843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:42663 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.055 [2024-04-24 10:06:54.147857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:41.056 [2024-04-24 10:06:54.147910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2795939494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.056 [2024-04-24 10:06:54.147924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:41.056 #45 NEW cov: 11799 ft: 15334 corp: 41/3031b lim: 105 exec/s: 45 rss: 71Mb L: 104/105 MS: 1 ChangeBinInt- 00:11:41.056 [2024-04-24 10:06:54.187578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.056 [2024-04-24 10:06:54.187605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:41.056 [2024-04-24 10:06:54.187644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.056 [2024-04-24 10:06:54.187660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:41.056 [2024-04-24 10:06:54.227746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:14 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.056 [2024-04-24 10:06:54.227773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:41.056 [2024-04-24 10:06:54.227823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.056 [2024-04-24 10:06:54.227839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:41.056 #47 NEW cov: 11799 ft: 15346 corp: 42/3079b lim: 105 exec/s: 47 rss: 72Mb L: 48/105 MS: 2 ShuffleBytes-PersAutoDict- DE: "\015\000"- 00:11:41.056 [2024-04-24 10:06:54.268070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:39582586372096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.056 [2024-04-24 10:06:54.268097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:41.056 [2024-04-24 10:06:54.268144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.056 [2024-04-24 10:06:54.268159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:41.056 [2024-04-24 10:06:54.268214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.056 [2024-04-24 10:06:54.268229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:41.056 [2024-04-24 10:06:54.268282] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:41.056 [2024-04-24 10:06:54.268297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:41.056 #48 NEW cov: 11799 ft: 15356 corp: 43/3167b lim: 105 exec/s: 24 rss: 72Mb L: 88/105 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\003"- 00:11:41.056 #48 DONE cov: 11799 ft: 15356 corp: 43/3167b lim: 105 exec/s: 24 rss: 72Mb 00:11:41.056 ###### Recommended dictionary. ###### 00:11:41.056 "\015\000" # Uses: 2 00:11:41.056 "\015\000\000\000" # Uses: 0 00:11:41.056 "\001\000\000\000\000\000\000\003" # Uses: 0 00:11:41.056 ###### End of recommended dictionary. ###### 00:11:41.056 Done 48 runs in 2 second(s) 00:11:41.316 10:06:54 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_16.conf 00:11:41.316 10:06:54 -- ../common.sh@72 -- # (( i++ )) 00:11:41.316 10:06:54 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:41.316 10:06:54 -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:11:41.316 10:06:54 -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:11:41.316 10:06:54 -- nvmf/run.sh@24 -- # local timen=1 00:11:41.316 10:06:54 -- nvmf/run.sh@25 -- # local core=0x1 00:11:41.316 10:06:54 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:11:41.316 10:06:54 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:11:41.316 10:06:54 -- nvmf/run.sh@29 -- # printf %02d 17 00:11:41.316 10:06:54 -- nvmf/run.sh@29 -- # port=4417 00:11:41.316 10:06:54 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:11:41.316 10:06:54 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:11:41.316 10:06:54 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:41.316 10:06:54 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 -r /var/tmp/spdk17.sock 00:11:41.316 [2024-04-24 10:06:54.458118] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:11:41.316 [2024-04-24 10:06:54.458199] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174092 ] 00:11:41.316 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.576 [2024-04-24 10:06:54.773306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.834 [2024-04-24 10:06:54.866281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:41.834 [2024-04-24 10:06:54.866428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.834 [2024-04-24 10:06:54.924924] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.834 [2024-04-24 10:06:54.941124] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:11:41.834 INFO: Running with entropic power schedule (0xFF, 100). 00:11:41.834 INFO: Seed: 3651977742 00:11:41.834 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:41.835 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:41.835 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:11:41.835 INFO: A corpus is not provided, starting from an empty corpus 00:11:41.835 #2 INITED exec/s: 0 rss: 60Mb 00:11:41.835 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:41.835 This may also happen if the target rejected all inputs we tried so far 00:11:41.835 [2024-04-24 10:06:54.986655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:41.835 [2024-04-24 10:06:54.986686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:41.835 [2024-04-24 10:06:54.986746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:41.835 [2024-04-24 10:06:54.986762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:41.835 [2024-04-24 10:06:54.986828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:41.835 [2024-04-24 10:06:54.986844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.093 NEW_FUNC[1/665]: 0x499910 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:11:42.093 NEW_FUNC[2/665]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:42.093 #10 NEW cov: 11593 ft: 11594 corp: 2/74b lim: 120 exec/s: 0 rss: 68Mb L: 73/73 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:11:42.093 [2024-04-24 10:06:55.317359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.093 [2024-04-24 10:06:55.317402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.093 [2024-04-24 10:06:55.317454] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.093 [2024-04-24 10:06:55.317469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.093 [2024-04-24 10:06:55.317521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.093 [2024-04-24 10:06:55.317536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.093 #16 NEW cov: 11706 ft: 12162 corp: 3/147b lim: 120 exec/s: 0 rss: 68Mb L: 73/73 MS: 1 ShuffleBytes- 00:11:42.093 [2024-04-24 10:06:55.367440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.093 [2024-04-24 10:06:55.367469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.093 [2024-04-24 10:06:55.367506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.093 [2024-04-24 10:06:55.367522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.093 [2024-04-24 10:06:55.367575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.093 [2024-04-24 10:06:55.367591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.352 #17 NEW cov: 11712 ft: 12428 corp: 4/220b lim: 120 exec/s: 0 rss: 68Mb L: 73/73 MS: 1 CrossOver- 00:11:42.352 [2024-04-24 10:06:55.407506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.407532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.407586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.407605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.407656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.407671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.352 #18 NEW cov: 11797 ft: 12628 corp: 5/293b lim: 120 exec/s: 0 rss: 68Mb L: 73/73 MS: 1 ChangeBit- 00:11:42.352 [2024-04-24 10:06:55.447598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.447625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.447661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:0 lba:11008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.447677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.447729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.447743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.352 #19 NEW cov: 11797 ft: 12699 corp: 6/367b lim: 120 exec/s: 0 rss: 68Mb L: 74/74 MS: 1 InsertByte- 00:11:42.352 [2024-04-24 10:06:55.487742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.487770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.487821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.487837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.487889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.487904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.352 #20 NEW cov: 11797 ft: 12756 corp: 7/441b lim: 120 exec/s: 0 rss: 68Mb L: 74/74 MS: 1 InsertByte- 00:11:42.352 [2024-04-24 10:06:55.527871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.527899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.527935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.527951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.528002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.528018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.352 #21 NEW cov: 11797 ft: 12786 corp: 8/514b lim: 120 exec/s: 0 rss: 68Mb L: 73/74 MS: 1 CopyPart- 00:11:42.352 [2024-04-24 10:06:55.568097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.568125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.568166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:11:42.352 [2024-04-24 10:06:55.568181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.568246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.568261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.568312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.568326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:42.352 #22 NEW cov: 11797 ft: 13182 corp: 9/623b lim: 120 exec/s: 0 rss: 68Mb L: 109/109 MS: 1 CrossOver- 00:11:42.352 [2024-04-24 10:06:55.608182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.608212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.352 [2024-04-24 10:06:55.608252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.352 [2024-04-24 10:06:55.608268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.353 [2024-04-24 10:06:55.608323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.353 [2024-04-24 10:06:55.608340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.353 #23 NEW cov: 11797 ft: 13265 corp: 10/696b lim: 120 exec/s: 0 rss: 68Mb L: 73/109 MS: 1 ChangeByte- 00:11:42.611 [2024-04-24 10:06:55.648328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.611 [2024-04-24 10:06:55.648356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.611 [2024-04-24 10:06:55.648392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.648408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.648458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.648473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.648522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.648537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:11:42.612 #24 NEW cov: 11797 ft: 13320 corp: 11/805b lim: 120 exec/s: 0 rss: 68Mb L: 109/109 MS: 1 ChangeBinInt- 00:11:42.612 [2024-04-24 10:06:55.688295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.688322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.688360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.688379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.688431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.688447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.612 #25 NEW cov: 11797 ft: 13363 corp: 12/878b lim: 120 exec/s: 0 rss: 68Mb L: 73/109 MS: 1 ShuffleBytes- 00:11:42.612 [2024-04-24 10:06:55.718349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.718376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.718406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.718421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.718471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.718486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.612 #26 NEW cov: 11797 ft: 13414 corp: 13/952b lim: 120 exec/s: 0 rss: 68Mb L: 74/109 MS: 1 CrossOver- 00:11:42.612 [2024-04-24 10:06:55.758645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.758672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.758714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.758729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.758779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.758794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 
10:06:55.758846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.758861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:42.612 #27 NEW cov: 11797 ft: 13429 corp: 14/1065b lim: 120 exec/s: 0 rss: 69Mb L: 113/113 MS: 1 CrossOver- 00:11:42.612 [2024-04-24 10:06:55.808634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16843009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.808661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.808697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.808711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.808762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.808777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.612 #31 NEW cov: 11797 ft: 13455 corp: 15/1153b lim: 120 exec/s: 0 rss: 69Mb L: 88/113 MS: 4 InsertRepeatedBytes-ChangeByte-ChangeBit-CrossOver- 00:11:42.612 [2024-04-24 10:06:55.848936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.848963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.849004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.849019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.849073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.849105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.849156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.849171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:42.612 #32 NEW cov: 11797 ft: 13506 corp: 16/1260b lim: 120 exec/s: 0 rss: 69Mb L: 107/113 MS: 1 CopyPart- 00:11:42.612 [2024-04-24 10:06:55.888899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.888926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 
10:06:55.888962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.888978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.612 [2024-04-24 10:06:55.889029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:313532612608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.612 [2024-04-24 10:06:55.889045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.871 #33 NEW cov: 11797 ft: 13531 corp: 17/1333b lim: 120 exec/s: 0 rss: 69Mb L: 73/113 MS: 1 ChangeBinInt- 00:11:42.872 [2024-04-24 10:06:55.919136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.919170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:55.919206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.919221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:55.919271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.919286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:55.919336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.919351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:42.872 #34 NEW cov: 11797 ft: 13594 corp: 18/1440b lim: 120 exec/s: 0 rss: 69Mb L: 107/113 MS: 1 CrossOver- 00:11:42.872 [2024-04-24 10:06:55.959214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.959242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:55.959284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.959299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:55.959335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.959351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:55.959403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:11:42.872 [2024-04-24 10:06:55.959418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:42.872 #35 NEW cov: 11797 ft: 13625 corp: 19/1553b lim: 120 exec/s: 35 rss: 69Mb L: 113/113 MS: 1 CopyPart- 00:11:42.872 [2024-04-24 10:06:55.999362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.999388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:55.999427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.999440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:55.999491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.999506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:55.999559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:55.999574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:42.872 #36 NEW cov: 11797 ft: 13640 corp: 20/1666b lim: 120 exec/s: 36 rss: 69Mb L: 113/113 MS: 1 ChangeBinInt- 00:11:42.872 [2024-04-24 10:06:56.039306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:56.039332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:56.039369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:56.039385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:56.039433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:313532612608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:56.039449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.872 #37 NEW cov: 11797 ft: 13692 corp: 21/1739b lim: 120 exec/s: 37 rss: 69Mb L: 73/113 MS: 1 ChangeBinInt- 00:11:42.872 [2024-04-24 10:06:56.079258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:56.079285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:56.079341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 
10:06:56.079357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.872 #38 NEW cov: 11797 ft: 14058 corp: 22/1797b lim: 120 exec/s: 38 rss: 69Mb L: 58/113 MS: 1 EraseBytes- 00:11:42.872 [2024-04-24 10:06:56.119522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:56.119549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:56.119592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:56.119608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:42.872 [2024-04-24 10:06:56.119659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:42.872 [2024-04-24 10:06:56.119675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:42.872 #43 NEW cov: 11797 ft: 14064 corp: 23/1882b lim: 120 exec/s: 43 rss: 69Mb L: 85/113 MS: 5 CopyPart-ChangeBit-ChangeByte-ChangeASCIIInt-InsertRepeatedBytes- 00:11:43.131 [2024-04-24 10:06:56.149729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.131 [2024-04-24 10:06:56.149757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.131 [2024-04-24 10:06:56.149802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.131 [2024-04-24 10:06:56.149817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.131 [2024-04-24 10:06:56.149868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.131 [2024-04-24 10:06:56.149884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.131 [2024-04-24 10:06:56.149934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.131 [2024-04-24 10:06:56.149949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:43.131 #44 NEW cov: 11797 ft: 14151 corp: 24/1995b lim: 120 exec/s: 44 rss: 69Mb L: 113/113 MS: 1 ShuffleBytes- 00:11:43.131 [2024-04-24 10:06:56.189754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.131 [2024-04-24 10:06:56.189782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.131 [2024-04-24 10:06:56.189817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:0 lba:1381105664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.131 [2024-04-24 10:06:56.189833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.131 [2024-04-24 10:06:56.189883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.131 [2024-04-24 10:06:56.189900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.131 #45 NEW cov: 11797 ft: 14182 corp: 25/2074b lim: 120 exec/s: 45 rss: 69Mb L: 79/113 MS: 1 InsertRepeatedBytes- 00:11:43.131 [2024-04-24 10:06:56.229845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.229874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.229916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.229932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.229982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.229997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.132 #46 NEW cov: 11797 ft: 14197 corp: 26/2147b lim: 120 exec/s: 46 rss: 69Mb L: 73/113 MS: 1 ChangeByte- 00:11:43.132 [2024-04-24 10:06:56.269974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16843009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.270000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.270037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4143972352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.270052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.270106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.270121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.132 #47 NEW cov: 11797 ft: 14241 corp: 27/2235b lim: 120 exec/s: 47 rss: 69Mb L: 88/113 MS: 1 ChangeBinInt- 00:11:43.132 [2024-04-24 10:06:56.310220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.310246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.310292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.310307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.310358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.310373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.310424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.310438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:43.132 #48 NEW cov: 11797 ft: 14261 corp: 28/2342b lim: 120 exec/s: 48 rss: 69Mb L: 107/113 MS: 1 ChangeBit- 00:11:43.132 [2024-04-24 10:06:56.350370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16843009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.350398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.350438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.350454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.350503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.350517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.350569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.350585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:43.132 #49 NEW cov: 11797 ft: 14281 corp: 29/2448b lim: 120 exec/s: 49 rss: 69Mb L: 106/113 MS: 1 CrossOver- 00:11:43.132 [2024-04-24 10:06:56.390450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.390476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.390524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.390540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.390590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.390605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.132 [2024-04-24 10:06:56.390655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.132 [2024-04-24 10:06:56.390670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:43.392 #50 NEW cov: 11797 ft: 14304 corp: 30/2561b lim: 120 exec/s: 50 rss: 69Mb L: 113/113 MS: 1 ChangeBinInt- 00:11:43.392 [2024-04-24 10:06:56.430416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:131072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.430443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.430477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5931803660581491282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.430493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.430545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.430560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.392 #51 NEW cov: 11797 ft: 14309 corp: 31/2644b lim: 120 exec/s: 51 rss: 69Mb L: 83/113 MS: 1 CMP- DE: "\002\000\000\000"- 00:11:43.392 [2024-04-24 10:06:56.470569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.470595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.470632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.470650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.470700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.470715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.392 #52 NEW cov: 11797 ft: 14321 corp: 32/2719b lim: 120 exec/s: 52 rss: 69Mb L: 75/113 MS: 1 InsertByte- 00:11:43.392 [2024-04-24 10:06:56.510660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:131072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.510687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.510728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1381105664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.510742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.510793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.510808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.392 #53 NEW cov: 11797 ft: 14338 corp: 33/2798b lim: 120 exec/s: 53 rss: 70Mb L: 79/113 MS: 1 PersAutoDict- DE: "\002\000\000\000"- 00:11:43.392 [2024-04-24 10:06:56.550776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.550803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.550839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1099511627522 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.550854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.550906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.550921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.392 #54 NEW cov: 11797 ft: 14342 corp: 34/2883b lim: 120 exec/s: 54 rss: 70Mb L: 85/113 MS: 1 PersAutoDict- DE: "\002\000\000\000"- 00:11:43.392 [2024-04-24 10:06:56.590925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2199023255552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.590952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.590988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.591005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.591054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.591075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.392 #55 NEW cov: 11797 ft: 14349 corp: 35/2956b lim: 120 exec/s: 55 rss: 70Mb L: 73/113 MS: 1 PersAutoDict- DE: "\002\000\000\000"- 00:11:43.392 [2024-04-24 10:06:56.621301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.621333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.621371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4294901760 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.621387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.621435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2251799813685248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.621449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.621500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.621515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.621567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.621582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:43.392 #56 NEW cov: 11797 ft: 14399 corp: 36/3076b lim: 120 exec/s: 56 rss: 70Mb L: 120/120 MS: 1 CrossOver- 00:11:43.392 [2024-04-24 10:06:56.661285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.661313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.661355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.661371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.661420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.661438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.392 [2024-04-24 10:06:56.661486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.392 [2024-04-24 10:06:56.661503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:43.651 #57 NEW cov: 11797 ft: 14400 corp: 37/3194b lim: 120 exec/s: 57 rss: 70Mb L: 118/120 MS: 1 InsertRepeatedBytes- 00:11:43.651 [2024-04-24 10:06:56.701398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.701425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.651 [2024-04-24 10:06:56.701465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.701481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.651 [2024-04-24 10:06:56.701531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.701547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.651 [2024-04-24 10:06:56.701598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.701616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:43.651 #58 NEW cov: 11797 ft: 14408 corp: 38/3313b lim: 120 exec/s: 58 rss: 70Mb L: 119/120 MS: 1 InsertByte- 00:11:43.651 [2024-04-24 10:06:56.741357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.741384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.651 [2024-04-24 10:06:56.741419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1099511627522 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.741435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.651 [2024-04-24 10:06:56.741486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:72057589742960640 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.741502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.651 #59 NEW cov: 11797 ft: 14428 corp: 39/3406b lim: 120 exec/s: 59 rss: 70Mb L: 93/120 MS: 1 CrossOver- 00:11:43.651 [2024-04-24 10:06:56.781600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.781629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.651 [2024-04-24 10:06:56.781665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.781680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.651 [2024-04-24 10:06:56.781729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.781744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.651 [2024-04-24 10:06:56.781797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.781810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:43.651 #60 NEW cov: 11797 ft: 14434 corp: 
40/3515b lim: 120 exec/s: 60 rss: 70Mb L: 109/120 MS: 1 ShuffleBytes- 00:11:43.651 [2024-04-24 10:06:56.821313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.821340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.651 #61 NEW cov: 11797 ft: 15270 corp: 41/3544b lim: 120 exec/s: 61 rss: 70Mb L: 29/120 MS: 1 EraseBytes- 00:11:43.651 [2024-04-24 10:06:56.871702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:16843009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.871730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.651 [2024-04-24 10:06:56.871765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.871789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.651 [2024-04-24 10:06:56.871839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:71776119061217280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.871857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.651 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:43.651 #62 NEW cov: 11820 ft: 15325 corp: 42/3633b lim: 120 exec/s: 62 rss: 70Mb L: 89/120 MS: 1 InsertByte- 00:11:43.651 [2024-04-24 10:06:56.911540] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.651 [2024-04-24 10:06:56.911567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.911 #63 NEW cov: 11820 ft: 15392 corp: 43/3662b lim: 120 exec/s: 63 rss: 70Mb L: 29/120 MS: 1 ChangeByte- 00:11:43.911 [2024-04-24 10:06:56.962106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.911 [2024-04-24 10:06:56.962134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:43.911 [2024-04-24 10:06:56.962170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.911 [2024-04-24 10:06:56.962186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:43.911 [2024-04-24 10:06:56.962237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:43.911 [2024-04-24 10:06:56.962252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:43.911 [2024-04-24 10:06:56.962304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:11:43.911 [2024-04-24 10:06:56.962319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:43.911 #64 pulse cov: 11820 ft: 15401 corp: 43/3662b lim: 120 exec/s: 32 rss: 70Mb 00:11:43.911 #64 NEW cov: 11820 ft: 15401 corp: 44/3780b lim: 120 exec/s: 32 rss: 70Mb L: 118/120 MS: 1 ChangeBit- 00:11:43.911 #64 DONE cov: 11820 ft: 15401 corp: 44/3780b lim: 120 exec/s: 32 rss: 70Mb 00:11:43.911 ###### Recommended dictionary. ###### 00:11:43.911 "\002\000\000\000" # Uses: 3 00:11:43.911 ###### End of recommended dictionary. ###### 00:11:43.911 Done 64 runs in 2 second(s) 00:11:43.911 10:06:57 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_17.conf 00:11:43.911 10:06:57 -- ../common.sh@72 -- # (( i++ )) 00:11:43.911 10:06:57 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:43.911 10:06:57 -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:11:43.911 10:06:57 -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:11:43.911 10:06:57 -- nvmf/run.sh@24 -- # local timen=1 00:11:43.911 10:06:57 -- nvmf/run.sh@25 -- # local core=0x1 00:11:43.911 10:06:57 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:11:43.911 10:06:57 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:11:43.911 10:06:57 -- nvmf/run.sh@29 -- # printf %02d 18 00:11:43.911 10:06:57 -- nvmf/run.sh@29 -- # port=4418 00:11:43.911 10:06:57 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:11:43.911 10:06:57 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:11:43.911 10:06:57 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:43.911 10:06:57 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 -r /var/tmp/spdk18.sock 00:11:43.911 [2024-04-24 10:06:57.160376] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:43.911 [2024-04-24 10:06:57.160453] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174459 ] 00:11:44.170 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.429 [2024-04-24 10:06:57.470808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.429 [2024-04-24 10:06:57.563154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:44.429 [2024-04-24 10:06:57.563278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.429 [2024-04-24 10:06:57.621666] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.429 [2024-04-24 10:06:57.637864] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:11:44.429 INFO: Running with entropic power schedule (0xFF, 100). 
00:11:44.429 INFO: Seed: 2055005023 00:11:44.429 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:44.429 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:44.429 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:11:44.429 INFO: A corpus is not provided, starting from an empty corpus 00:11:44.429 #2 INITED exec/s: 0 rss: 61Mb 00:11:44.429 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:44.429 This may also happen if the target rejected all inputs we tried so far 00:11:44.429 [2024-04-24 10:06:57.683355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:44.429 [2024-04-24 10:06:57.683388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:44.429 [2024-04-24 10:06:57.683432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:44.429 [2024-04-24 10:06:57.683447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:44.429 [2024-04-24 10:06:57.683500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:44.429 [2024-04-24 10:06:57.683514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:44.429 [2024-04-24 10:06:57.683567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:44.429 [2024-04-24 10:06:57.683582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:44.946 NEW_FUNC[1/663]: 0x49d170 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:11:44.946 NEW_FUNC[2/663]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:44.946 #13 NEW cov: 11537 ft: 11538 corp: 2/90b lim: 100 exec/s: 0 rss: 68Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:11:44.946 [2024-04-24 10:06:58.014160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:44.946 [2024-04-24 10:06:58.014200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.014251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:44.946 [2024-04-24 10:06:58.014265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.014319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:44.946 [2024-04-24 10:06:58.014333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.014385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:44.946 [2024-04-24 10:06:58.014405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:44.946 #14 NEW cov: 11650 ft: 11931 corp: 3/182b lim: 100 exec/s: 0 rss: 68Mb L: 92/92 MS: 1 InsertRepeatedBytes- 00:11:44.946 [2024-04-24 10:06:58.054212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:44.946 [2024-04-24 10:06:58.054239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.054274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:44.946 [2024-04-24 10:06:58.054288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.054339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:44.946 [2024-04-24 10:06:58.054353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.054403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:44.946 [2024-04-24 10:06:58.054417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:44.946 #15 NEW cov: 11656 ft: 12327 corp: 4/271b lim: 100 exec/s: 0 rss: 68Mb L: 89/92 MS: 1 ChangeBit- 00:11:44.946 [2024-04-24 10:06:58.094326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:44.946 [2024-04-24 10:06:58.094352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.094417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:44.946 [2024-04-24 10:06:58.094432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.094484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:44.946 [2024-04-24 10:06:58.094498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.094551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:44.946 [2024-04-24 10:06:58.094565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:44.946 #21 NEW cov: 11741 ft: 12550 corp: 5/360b lim: 100 exec/s: 0 rss: 68Mb L: 89/92 MS: 1 CrossOver- 00:11:44.946 [2024-04-24 10:06:58.134334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:44.946 [2024-04-24 10:06:58.134361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.134394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:44.946 [2024-04-24 10:06:58.134408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 
10:06:58.134462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:44.946 [2024-04-24 10:06:58.134475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:44.946 #22 NEW cov: 11741 ft: 12957 corp: 6/425b lim: 100 exec/s: 0 rss: 68Mb L: 65/92 MS: 1 EraseBytes- 00:11:44.946 [2024-04-24 10:06:58.174519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:44.946 [2024-04-24 10:06:58.174545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.174590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:44.946 [2024-04-24 10:06:58.174607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.174658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:44.946 [2024-04-24 10:06:58.174672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.174725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:44.946 [2024-04-24 10:06:58.174738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:44.946 #23 NEW cov: 11741 ft: 13003 corp: 7/514b lim: 100 exec/s: 0 rss: 68Mb L: 89/92 MS: 1 ChangeBit- 00:11:44.946 [2024-04-24 10:06:58.214658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:44.946 [2024-04-24 10:06:58.214684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.214731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:44.946 [2024-04-24 10:06:58.214745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.214797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:44.946 [2024-04-24 10:06:58.214811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:44.946 [2024-04-24 10:06:58.214863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:44.946 [2024-04-24 10:06:58.214876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.206 #24 NEW cov: 11741 ft: 13113 corp: 8/610b lim: 100 exec/s: 0 rss: 68Mb L: 96/96 MS: 1 CrossOver- 00:11:45.206 [2024-04-24 10:06:58.254750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.206 [2024-04-24 10:06:58.254777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.254823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.206 [2024-04-24 10:06:58.254837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.254886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.206 [2024-04-24 10:06:58.254900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.254952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.206 [2024-04-24 10:06:58.254965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.206 #25 NEW cov: 11741 ft: 13125 corp: 9/701b lim: 100 exec/s: 0 rss: 68Mb L: 91/96 MS: 1 CopyPart- 00:11:45.206 [2024-04-24 10:06:58.294762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.206 [2024-04-24 10:06:58.294788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.294826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.206 [2024-04-24 10:06:58.294841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.294908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.206 [2024-04-24 10:06:58.294925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.206 #26 NEW cov: 11741 ft: 13194 corp: 10/766b lim: 100 exec/s: 0 rss: 68Mb L: 65/96 MS: 1 CopyPart- 00:11:45.206 [2024-04-24 10:06:58.335014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.206 [2024-04-24 10:06:58.335040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.335087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.206 [2024-04-24 10:06:58.335102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.335157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.206 [2024-04-24 10:06:58.335171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.335225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.206 [2024-04-24 10:06:58.335238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.206 #27 NEW cov: 11741 ft: 13259 corp: 11/863b lim: 100 exec/s: 0 rss: 68Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:11:45.206 [2024-04-24 10:06:58.374864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.206 [2024-04-24 10:06:58.374890] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.374938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.206 [2024-04-24 10:06:58.374953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.206 #28 NEW cov: 11741 ft: 13580 corp: 12/911b lim: 100 exec/s: 0 rss: 68Mb L: 48/97 MS: 1 EraseBytes- 00:11:45.206 [2024-04-24 10:06:58.415089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.206 [2024-04-24 10:06:58.415115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.415176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.206 [2024-04-24 10:06:58.415187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.415240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.206 [2024-04-24 10:06:58.415254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.206 #29 NEW cov: 11741 ft: 13608 corp: 13/975b lim: 100 exec/s: 0 rss: 68Mb L: 64/97 MS: 1 EraseBytes- 00:11:45.206 [2024-04-24 10:06:58.455312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.206 [2024-04-24 10:06:58.455337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.455388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.206 [2024-04-24 10:06:58.455403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.455456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.206 [2024-04-24 10:06:58.455470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.206 [2024-04-24 10:06:58.455525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.206 [2024-04-24 10:06:58.455538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.206 #30 NEW cov: 11741 ft: 13621 corp: 14/1072b lim: 100 exec/s: 0 rss: 68Mb L: 97/97 MS: 1 CopyPart- 00:11:45.465 [2024-04-24 10:06:58.495481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.465 [2024-04-24 10:06:58.495523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.495559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.465 [2024-04-24 10:06:58.495573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.495626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.465 [2024-04-24 10:06:58.495641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.495695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.465 [2024-04-24 10:06:58.495708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.465 #31 NEW cov: 11741 ft: 13631 corp: 15/1164b lim: 100 exec/s: 0 rss: 68Mb L: 92/97 MS: 1 CopyPart- 00:11:45.465 [2024-04-24 10:06:58.535535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.465 [2024-04-24 10:06:58.535562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.535624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.465 [2024-04-24 10:06:58.535638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.535691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.465 [2024-04-24 10:06:58.535706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.535760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.465 [2024-04-24 10:06:58.535775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.465 #32 NEW cov: 11741 ft: 13735 corp: 16/1256b lim: 100 exec/s: 0 rss: 69Mb L: 92/97 MS: 1 CrossOver- 00:11:45.465 [2024-04-24 10:06:58.575480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.465 [2024-04-24 10:06:58.575505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.575550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.465 [2024-04-24 10:06:58.575565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.465 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:45.465 #33 NEW cov: 11764 ft: 13809 corp: 17/1314b lim: 100 exec/s: 0 rss: 69Mb L: 58/97 MS: 1 EraseBytes- 00:11:45.465 [2024-04-24 10:06:58.625832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.465 [2024-04-24 10:06:58.625856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.625903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.465 [2024-04-24 10:06:58.625920] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.625949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.465 [2024-04-24 10:06:58.625962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.626015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.465 [2024-04-24 10:06:58.626029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.465 #34 NEW cov: 11764 ft: 13857 corp: 18/1406b lim: 100 exec/s: 0 rss: 69Mb L: 92/97 MS: 1 CopyPart- 00:11:45.465 [2024-04-24 10:06:58.665945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.465 [2024-04-24 10:06:58.665970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.666020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.465 [2024-04-24 10:06:58.666034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.666080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.465 [2024-04-24 10:06:58.666094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.666146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.465 [2024-04-24 10:06:58.666160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.465 #35 NEW cov: 11764 ft: 13875 corp: 19/1497b lim: 100 exec/s: 35 rss: 69Mb L: 91/97 MS: 1 CopyPart- 00:11:45.465 [2024-04-24 10:06:58.705929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.465 [2024-04-24 10:06:58.705953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.705995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.465 [2024-04-24 10:06:58.706010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.465 [2024-04-24 10:06:58.706066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.465 [2024-04-24 10:06:58.706080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.465 #36 NEW cov: 11764 ft: 13886 corp: 20/1561b lim: 100 exec/s: 36 rss: 69Mb L: 64/97 MS: 1 ChangeBit- 00:11:45.724 [2024-04-24 10:06:58.746237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.724 [2024-04-24 10:06:58.746262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.724 [2024-04-24 10:06:58.746303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.724 [2024-04-24 10:06:58.746318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.724 [2024-04-24 10:06:58.746353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.724 [2024-04-24 10:06:58.746367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.724 [2024-04-24 10:06:58.746417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.724 [2024-04-24 10:06:58.746432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.724 #37 NEW cov: 11764 ft: 13898 corp: 21/1658b lim: 100 exec/s: 37 rss: 69Mb L: 97/97 MS: 1 ChangeBinInt- 00:11:45.724 [2024-04-24 10:06:58.786318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.724 [2024-04-24 10:06:58.786344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.724 [2024-04-24 10:06:58.786390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.724 [2024-04-24 10:06:58.786405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.724 [2024-04-24 10:06:58.786457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.724 [2024-04-24 10:06:58.786468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.724 [2024-04-24 10:06:58.786521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.724 [2024-04-24 10:06:58.786534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.724 #38 NEW cov: 11764 ft: 13955 corp: 22/1755b lim: 100 exec/s: 38 rss: 69Mb L: 97/97 MS: 1 ChangeBit- 00:11:45.724 [2024-04-24 10:06:58.826161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.724 [2024-04-24 10:06:58.826186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.725 [2024-04-24 10:06:58.826232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.725 [2024-04-24 10:06:58.826245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.725 #39 NEW cov: 11764 ft: 13979 corp: 23/1803b lim: 100 exec/s: 39 rss: 69Mb L: 48/97 MS: 1 ChangeBinInt- 00:11:45.725 [2024-04-24 10:06:58.866397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.725 [2024-04-24 10:06:58.866421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.725 [2024-04-24 
10:06:58.866456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.725 [2024-04-24 10:06:58.866470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.725 [2024-04-24 10:06:58.866524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.725 [2024-04-24 10:06:58.866538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.725 #40 NEW cov: 11764 ft: 13992 corp: 24/1864b lim: 100 exec/s: 40 rss: 69Mb L: 61/97 MS: 1 EraseBytes- 00:11:45.725 [2024-04-24 10:06:58.906490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.725 [2024-04-24 10:06:58.906515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.725 [2024-04-24 10:06:58.906558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.725 [2024-04-24 10:06:58.906573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.725 [2024-04-24 10:06:58.906626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.725 [2024-04-24 10:06:58.906639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.725 #41 NEW cov: 11764 ft: 14009 corp: 25/1927b lim: 100 exec/s: 41 rss: 70Mb L: 63/97 MS: 1 CopyPart- 00:11:45.725 [2024-04-24 10:06:58.946769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.725 [2024-04-24 10:06:58.946794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.725 [2024-04-24 10:06:58.946852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.725 [2024-04-24 10:06:58.946868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.725 [2024-04-24 10:06:58.946922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.725 [2024-04-24 10:06:58.946936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.725 [2024-04-24 10:06:58.946991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.725 [2024-04-24 10:06:58.947005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.725 #42 NEW cov: 11764 ft: 14018 corp: 26/2014b lim: 100 exec/s: 42 rss: 70Mb L: 87/97 MS: 1 InsertRepeatedBytes- 00:11:45.725 [2024-04-24 10:06:58.986767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.725 [2024-04-24 10:06:58.986792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.725 [2024-04-24 10:06:58.986840] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.725 [2024-04-24 10:06:58.986854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.725 [2024-04-24 10:06:58.986906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.725 [2024-04-24 10:06:58.986920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.984 #43 NEW cov: 11764 ft: 14024 corp: 27/2078b lim: 100 exec/s: 43 rss: 70Mb L: 64/97 MS: 1 ChangeBinInt- 00:11:45.984 [2024-04-24 10:06:59.026848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.984 [2024-04-24 10:06:59.026874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.026911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.984 [2024-04-24 10:06:59.026924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.026978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.984 [2024-04-24 10:06:59.026992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.984 #44 NEW cov: 11764 ft: 14047 corp: 28/2139b lim: 100 exec/s: 44 rss: 70Mb L: 61/97 MS: 1 ChangeByte- 00:11:45.984 [2024-04-24 10:06:59.067119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.984 [2024-04-24 10:06:59.067144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.067195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.984 [2024-04-24 10:06:59.067209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.067261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.984 [2024-04-24 10:06:59.067275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.067328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.984 [2024-04-24 10:06:59.067344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.984 #45 NEW cov: 11764 ft: 14058 corp: 29/2230b lim: 100 exec/s: 45 rss: 70Mb L: 91/97 MS: 1 ShuffleBytes- 00:11:45.984 [2024-04-24 10:06:59.107099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.984 [2024-04-24 10:06:59.107124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.107160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.984 
[2024-04-24 10:06:59.107175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.107243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.984 [2024-04-24 10:06:59.107257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.984 #46 NEW cov: 11764 ft: 14105 corp: 30/2291b lim: 100 exec/s: 46 rss: 70Mb L: 61/97 MS: 1 ChangeBinInt- 00:11:45.984 [2024-04-24 10:06:59.147232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.984 [2024-04-24 10:06:59.147258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.147318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.984 [2024-04-24 10:06:59.147334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.147389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.984 [2024-04-24 10:06:59.147403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.984 #47 NEW cov: 11764 ft: 14130 corp: 31/2355b lim: 100 exec/s: 47 rss: 70Mb L: 64/97 MS: 1 ChangeBit- 00:11:45.984 [2024-04-24 10:06:59.187470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.984 [2024-04-24 10:06:59.187494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.187540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.984 [2024-04-24 10:06:59.187555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.187606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.984 [2024-04-24 10:06:59.187620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.187673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:45.984 [2024-04-24 10:06:59.187685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:45.984 #48 NEW cov: 11764 ft: 14157 corp: 32/2446b lim: 100 exec/s: 48 rss: 70Mb L: 91/97 MS: 1 ShuffleBytes- 00:11:45.984 [2024-04-24 10:06:59.227447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:45.984 [2024-04-24 10:06:59.227474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.227511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:45.984 [2024-04-24 10:06:59.227525] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:45.984 [2024-04-24 10:06:59.227588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:45.984 [2024-04-24 10:06:59.227602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:45.984 #49 NEW cov: 11764 ft: 14172 corp: 33/2520b lim: 100 exec/s: 49 rss: 70Mb L: 74/97 MS: 1 CopyPart- 00:11:46.243 [2024-04-24 10:06:59.267563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.243 [2024-04-24 10:06:59.267589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.243 [2024-04-24 10:06:59.267631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.243 [2024-04-24 10:06:59.267646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.243 [2024-04-24 10:06:59.267698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.243 [2024-04-24 10:06:59.267712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.243 #50 NEW cov: 11764 ft: 14185 corp: 34/2583b lim: 100 exec/s: 50 rss: 70Mb L: 63/97 MS: 1 ChangeBinInt- 00:11:46.243 [2024-04-24 10:06:59.307667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.243 [2024-04-24 10:06:59.307692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.243 [2024-04-24 10:06:59.307733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.243 [2024-04-24 10:06:59.307748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.243 [2024-04-24 10:06:59.307801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.243 [2024-04-24 10:06:59.307815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.243 #51 NEW cov: 11764 ft: 14193 corp: 35/2651b lim: 100 exec/s: 51 rss: 70Mb L: 68/97 MS: 1 EraseBytes- 00:11:46.243 [2024-04-24 10:06:59.347816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.243 [2024-04-24 10:06:59.347840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.243 [2024-04-24 10:06:59.347899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.243 [2024-04-24 10:06:59.347915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.243 [2024-04-24 10:06:59.347968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.243 [2024-04-24 10:06:59.347982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.244 #52 NEW cov: 11764 ft: 14205 corp: 36/2724b lim: 100 exec/s: 52 rss: 70Mb L: 73/97 MS: 1 CMP- DE: "\003\000\000\000\000\000\000\000"- 00:11:46.244 [2024-04-24 10:06:59.388072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.244 [2024-04-24 10:06:59.388098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.388147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.244 [2024-04-24 10:06:59.388163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.388233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.244 [2024-04-24 10:06:59.388247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.388303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:46.244 [2024-04-24 10:06:59.388318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:46.244 #53 NEW cov: 11764 ft: 14209 corp: 37/2821b lim: 100 exec/s: 53 rss: 70Mb L: 97/97 MS: 1 PersAutoDict- DE: "\003\000\000\000\000\000\000\000"- 00:11:46.244 [2024-04-24 10:06:59.428167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.244 [2024-04-24 10:06:59.428192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.428259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.244 [2024-04-24 10:06:59.428274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.428325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.244 [2024-04-24 10:06:59.428339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.428392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:46.244 [2024-04-24 10:06:59.428405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:46.244 #54 NEW cov: 11764 ft: 14223 corp: 38/2920b lim: 100 exec/s: 54 rss: 71Mb L: 99/99 MS: 1 PersAutoDict- DE: "\003\000\000\000\000\000\000\000"- 00:11:46.244 [2024-04-24 10:06:59.468393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.244 [2024-04-24 10:06:59.468419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.468472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.244 [2024-04-24 10:06:59.468486] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.468536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.244 [2024-04-24 10:06:59.468551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.468602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:46.244 [2024-04-24 10:06:59.468616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.468668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:11:46.244 [2024-04-24 10:06:59.468683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:46.244 #55 NEW cov: 11764 ft: 14266 corp: 39/3020b lim: 100 exec/s: 55 rss: 71Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:11:46.244 [2024-04-24 10:06:59.508369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.244 [2024-04-24 10:06:59.508396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.508443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.244 [2024-04-24 10:06:59.508457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.508510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.244 [2024-04-24 10:06:59.508528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.244 [2024-04-24 10:06:59.508579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:46.244 [2024-04-24 10:06:59.508593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:46.503 [2024-04-24 10:06:59.548509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.503 [2024-04-24 10:06:59.548534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.503 [2024-04-24 10:06:59.548596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.503 [2024-04-24 10:06:59.548611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.548663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.504 [2024-04-24 10:06:59.548677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.548729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:46.504 [2024-04-24 
10:06:59.548743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:46.504 #57 NEW cov: 11764 ft: 14302 corp: 40/3109b lim: 100 exec/s: 57 rss: 71Mb L: 89/100 MS: 2 CrossOver-ChangeByte- 00:11:46.504 [2024-04-24 10:06:59.588481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.504 [2024-04-24 10:06:59.588506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.588544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.504 [2024-04-24 10:06:59.588559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.588609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.504 [2024-04-24 10:06:59.588623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.504 #58 NEW cov: 11764 ft: 14354 corp: 41/3174b lim: 100 exec/s: 58 rss: 71Mb L: 65/100 MS: 1 ChangeBinInt- 00:11:46.504 [2024-04-24 10:06:59.628688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.504 [2024-04-24 10:06:59.628716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.628759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.504 [2024-04-24 10:06:59.628774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.628825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.504 [2024-04-24 10:06:59.628840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.628895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:46.504 [2024-04-24 10:06:59.628910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:46.504 #59 NEW cov: 11764 ft: 14359 corp: 42/3263b lim: 100 exec/s: 59 rss: 71Mb L: 89/100 MS: 1 ShuffleBytes- 00:11:46.504 [2024-04-24 10:06:59.668849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.504 [2024-04-24 10:06:59.668878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.668914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.504 [2024-04-24 10:06:59.668927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.668993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.504 [2024-04-24 10:06:59.669008] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.669065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:46.504 [2024-04-24 10:06:59.669081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.698913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:11:46.504 [2024-04-24 10:06:59.698939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.699005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:11:46.504 [2024-04-24 10:06:59.699020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.699071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:11:46.504 [2024-04-24 10:06:59.699087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:46.504 [2024-04-24 10:06:59.699117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:11:46.504 [2024-04-24 10:06:59.699131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:46.504 #61 NEW cov: 11764 ft: 14400 corp: 43/3355b lim: 100 exec/s: 30 rss: 71Mb L: 92/100 MS: 2 InsertRepeatedBytes-ChangeBit- 00:11:46.504 #61 DONE cov: 11764 ft: 14400 corp: 43/3355b lim: 100 exec/s: 30 rss: 71Mb 00:11:46.504 ###### Recommended dictionary. ###### 00:11:46.504 "\003\000\000\000\000\000\000\000" # Uses: 2 00:11:46.504 ###### End of recommended dictionary. 
###### 00:11:46.504 Done 61 runs in 2 second(s) 00:11:46.763 10:06:59 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_18.conf 00:11:46.763 10:06:59 -- ../common.sh@72 -- # (( i++ )) 00:11:46.763 10:06:59 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:46.763 10:06:59 -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:11:46.763 10:06:59 -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:11:46.763 10:06:59 -- nvmf/run.sh@24 -- # local timen=1 00:11:46.763 10:06:59 -- nvmf/run.sh@25 -- # local core=0x1 00:11:46.764 10:06:59 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:11:46.764 10:06:59 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:11:46.764 10:06:59 -- nvmf/run.sh@29 -- # printf %02d 19 00:11:46.764 10:06:59 -- nvmf/run.sh@29 -- # port=4419 00:11:46.764 10:06:59 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:11:46.764 10:06:59 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:11:46.764 10:06:59 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:46.764 10:06:59 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 -r /var/tmp/spdk19.sock 00:11:46.764 [2024-04-24 10:06:59.906290] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:46.764 [2024-04-24 10:06:59.906369] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174823 ] 00:11:46.764 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.023 [2024-04-24 10:07:00.195860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.023 [2024-04-24 10:07:00.288886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:47.023 [2024-04-24 10:07:00.289038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.282 [2024-04-24 10:07:00.347621] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.282 [2024-04-24 10:07:00.363818] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:11:47.282 INFO: Running with entropic power schedule (0xFF, 100). 00:11:47.282 INFO: Seed: 485042329 00:11:47.282 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:47.282 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:47.282 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:11:47.282 INFO: A corpus is not provided, starting from an empty corpus 00:11:47.282 #2 INITED exec/s: 0 rss: 60Mb 00:11:47.282 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:47.282 This may also happen if the target rejected all inputs we tried so far 00:11:47.282 [2024-04-24 10:07:00.409245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:47.282 [2024-04-24 10:07:00.409279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:47.282 [2024-04-24 10:07:00.409317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:47.282 [2024-04-24 10:07:00.409334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:47.282 [2024-04-24 10:07:00.409385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:47.282 [2024-04-24 10:07:00.409400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:47.282 [2024-04-24 10:07:00.409451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:47.282 [2024-04-24 10:07:00.409466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:47.541 NEW_FUNC[1/663]: 0x4a0130 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:11:47.541 NEW_FUNC[2/663]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:47.541 #7 NEW cov: 11515 ft: 11516 corp: 2/43b lim: 50 exec/s: 0 rss: 68Mb L: 42/42 MS: 5 ShuffleBytes-InsertRepeatedBytes-CrossOver-ChangeByte-InsertRepeatedBytes- 00:11:47.541 [2024-04-24 10:07:00.739859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:11:47.541 [2024-04-24 10:07:00.739899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:47.541 [2024-04-24 10:07:00.739952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:47.541 [2024-04-24 10:07:00.739967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:47.541 #9 NEW cov: 11628 ft: 12200 corp: 3/69b lim: 50 exec/s: 0 rss: 68Mb L: 26/42 MS: 2 CopyPart-InsertRepeatedBytes- 00:11:47.541 [2024-04-24 10:07:00.780094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:47.541 [2024-04-24 10:07:00.780127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:47.541 [2024-04-24 10:07:00.780160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:47.541 [2024-04-24 10:07:00.780174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:47.541 [2024-04-24 
10:07:00.780223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3602879701896396799 len:65536 00:11:47.541 [2024-04-24 10:07:00.780239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:47.541 [2024-04-24 10:07:00.780287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:47.541 [2024-04-24 10:07:00.780302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:47.541 #10 NEW cov: 11634 ft: 12417 corp: 4/111b lim: 50 exec/s: 0 rss: 68Mb L: 42/42 MS: 1 ChangeByte- 00:11:47.801 [2024-04-24 10:07:00.820004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:00.820033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.820091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:00.820106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:47.801 #11 NEW cov: 11719 ft: 12823 corp: 5/138b lim: 50 exec/s: 0 rss: 68Mb L: 27/42 MS: 1 InsertByte- 00:11:47.801 [2024-04-24 10:07:00.860443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:47.801 [2024-04-24 10:07:00.860471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.860515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:00.860531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.860580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:00.860594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.860641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:00.860656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.860704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:10634005409016288147 len:37780 00:11:47.801 [2024-04-24 10:07:00.860719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:47.801 #12 NEW cov: 11719 ft: 12976 corp: 6/188b lim: 50 exec/s: 0 rss: 68Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:11:47.801 [2024-04-24 10:07:00.900433] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:47.801 [2024-04-24 10:07:00.900460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.900499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:00.900514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.900563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:00.900579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.900627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:00.900640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:47.801 #13 NEW cov: 11719 ft: 13056 corp: 7/230b lim: 50 exec/s: 0 rss: 68Mb L: 42/50 MS: 1 ShuffleBytes- 00:11:47.801 [2024-04-24 10:07:00.940562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:47.801 [2024-04-24 10:07:00.940589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.940625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6917529027630554975 len:65536 00:11:47.801 [2024-04-24 10:07:00.940640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.940692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:12800 00:11:47.801 [2024-04-24 10:07:00.940707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.940757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:00.940772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:47.801 #14 NEW cov: 11719 ft: 13107 corp: 8/276b lim: 50 exec/s: 0 rss: 68Mb L: 46/50 MS: 1 InsertRepeatedBytes- 00:11:47.801 [2024-04-24 10:07:00.980683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:47.801 [2024-04-24 10:07:00.980710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.980751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:00.980766] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.980816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3602878610974703615 len:2348 00:11:47.801 [2024-04-24 10:07:00.980832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:00.980880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14339461210917994140 len:65536 00:11:47.801 [2024-04-24 10:07:00.980895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:47.801 #15 NEW cov: 11719 ft: 13233 corp: 9/318b lim: 50 exec/s: 0 rss: 68Mb L: 42/50 MS: 1 CMP- DE: "\001\011+cB~\234\306"- 00:11:47.801 [2024-04-24 10:07:01.020785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:47.801 [2024-04-24 10:07:01.020818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:01.020854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65470 00:11:47.801 [2024-04-24 10:07:01.020869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:01.020933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:01.020949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:01.020998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:01.021014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:47.801 #16 NEW cov: 11719 ft: 13275 corp: 10/361b lim: 50 exec/s: 0 rss: 68Mb L: 43/50 MS: 1 InsertByte- 00:11:47.801 [2024-04-24 10:07:01.060989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:47.801 [2024-04-24 10:07:01.061019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:01.061081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:01.061098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:01.061149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551611 len:65536 00:11:47.801 [2024-04-24 10:07:01.061166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:47.801 
[2024-04-24 10:07:01.061215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:47.801 [2024-04-24 10:07:01.061229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:47.801 [2024-04-24 10:07:01.061279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:10634005409016288147 len:37780 00:11:47.801 [2024-04-24 10:07:01.061295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:48.060 #17 NEW cov: 11719 ft: 13319 corp: 11/411b lim: 50 exec/s: 0 rss: 68Mb L: 50/50 MS: 1 ChangeBit- 00:11:48.061 [2024-04-24 10:07:01.101035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.061 [2024-04-24 10:07:01.101066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.101113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.061 [2024-04-24 10:07:01.101128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.101176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3602878610974703615 len:2348 00:11:48.061 [2024-04-24 10:07:01.101191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.101240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14339461210917994140 len:65536 00:11:48.061 [2024-04-24 10:07:01.101257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.061 #18 NEW cov: 11719 ft: 13388 corp: 12/453b lim: 50 exec/s: 0 rss: 69Mb L: 42/50 MS: 1 ShuffleBytes- 00:11:48.061 [2024-04-24 10:07:01.141148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.061 [2024-04-24 10:07:01.141175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.141210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743386504257375 len:65536 00:11:48.061 [2024-04-24 10:07:01.141223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.141273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.061 [2024-04-24 10:07:01.141287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.141339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073696051199 len:65536 00:11:48.061 
[2024-04-24 10:07:01.141355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.061 #19 NEW cov: 11719 ft: 13415 corp: 13/502b lim: 50 exec/s: 0 rss: 69Mb L: 49/50 MS: 1 InsertRepeatedBytes- 00:11:48.061 [2024-04-24 10:07:01.181353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:660730905627341313 len:32413 00:11:48.061 [2024-04-24 10:07:01.181379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.181419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744072753250303 len:65536 00:11:48.061 [2024-04-24 10:07:01.181432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.181482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.061 [2024-04-24 10:07:01.181496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.181546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374978039231492607 len:25411 00:11:48.061 [2024-04-24 10:07:01.181561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.181611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744071538788095 len:65536 00:11:48.061 [2024-04-24 10:07:01.181626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:48.061 #20 NEW cov: 11719 ft: 13431 corp: 14/552b lim: 50 exec/s: 0 rss: 69Mb L: 50/50 MS: 1 PersAutoDict- DE: "\001\011+cB~\234\306"- 00:11:48.061 [2024-04-24 10:07:01.221354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743017318711295 len:2611 00:11:48.061 [2024-04-24 10:07:01.221380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.221430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.061 [2024-04-24 10:07:01.221446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.221501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.061 [2024-04-24 10:07:01.221516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.221566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:74638577311547391 len:17023 00:11:48.061 [2024-04-24 10:07:01.221581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:11:48.061 #21 NEW cov: 11719 ft: 13461 corp: 15/601b lim: 50 exec/s: 0 rss: 69Mb L: 49/50 MS: 1 CopyPart- 00:11:48.061 [2024-04-24 10:07:01.261393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.061 [2024-04-24 10:07:01.261420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.261455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3603171261445769738 len:25411 00:11:48.061 [2024-04-24 10:07:01.261470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.261522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744071534763871 len:65536 00:11:48.061 [2024-04-24 10:07:01.261538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.061 #22 NEW cov: 11719 ft: 13668 corp: 16/635b lim: 50 exec/s: 0 rss: 69Mb L: 34/50 MS: 1 CrossOver- 00:11:48.061 [2024-04-24 10:07:01.301556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743017318711295 len:2611 00:11:48.061 [2024-04-24 10:07:01.301582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.301627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:11:48.061 [2024-04-24 10:07:01.301642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.301709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.061 [2024-04-24 10:07:01.301725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.061 [2024-04-24 10:07:01.301775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:74638577311547391 len:17023 00:11:48.061 [2024-04-24 10:07:01.301791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.061 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:48.061 #23 NEW cov: 11742 ft: 13706 corp: 17/684b lim: 50 exec/s: 0 rss: 69Mb L: 49/50 MS: 1 ChangeBit- 00:11:48.321 [2024-04-24 10:07:01.351542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.321 [2024-04-24 10:07:01.351570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.351603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.321 [2024-04-24 10:07:01.351618] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.321 #24 NEW cov: 11742 ft: 13737 corp: 18/705b lim: 50 exec/s: 0 rss: 69Mb L: 21/50 MS: 1 EraseBytes- 00:11:48.321 [2024-04-24 10:07:01.391736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792633534417207295 len:65536 00:11:48.321 [2024-04-24 10:07:01.391763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.391813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.321 [2024-04-24 10:07:01.391828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.391880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.321 [2024-04-24 10:07:01.391895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.321 #25 NEW cov: 11742 ft: 13746 corp: 19/736b lim: 50 exec/s: 25 rss: 69Mb L: 31/50 MS: 1 CopyPart- 00:11:48.321 [2024-04-24 10:07:01.431787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:11:48.321 [2024-04-24 10:07:01.431813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.431869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.321 [2024-04-24 10:07:01.431886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.321 #26 NEW cov: 11742 ft: 13768 corp: 20/763b lim: 50 exec/s: 26 rss: 69Mb L: 27/50 MS: 1 CrossOver- 00:11:48.321 [2024-04-24 10:07:01.472089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.321 [2024-04-24 10:07:01.472116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.472159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6917529027630554975 len:65536 00:11:48.321 [2024-04-24 10:07:01.472175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.472225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:12800 00:11:48.321 [2024-04-24 10:07:01.472240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.472289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744070186401791 len:65536 00:11:48.321 [2024-04-24 10:07:01.472304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:11:48.321 #27 NEW cov: 11742 ft: 13840 corp: 21/809b lim: 50 exec/s: 27 rss: 69Mb L: 46/50 MS: 1 ChangeBinInt- 00:11:48.321 [2024-04-24 10:07:01.511984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.321 [2024-04-24 10:07:01.512010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.512043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65520 00:11:48.321 [2024-04-24 10:07:01.512063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.321 #28 NEW cov: 11742 ft: 13934 corp: 22/830b lim: 50 exec/s: 28 rss: 69Mb L: 21/50 MS: 1 ChangeBit- 00:11:48.321 [2024-04-24 10:07:01.552135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:11:48.321 [2024-04-24 10:07:01.552167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.552221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:8704 00:11:48.321 [2024-04-24 10:07:01.552237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.321 #29 NEW cov: 11742 ft: 13941 corp: 23/857b lim: 50 exec/s: 29 rss: 69Mb L: 27/50 MS: 1 InsertByte- 00:11:48.321 [2024-04-24 10:07:01.592452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743017318711295 len:2611 00:11:48.321 [2024-04-24 10:07:01.592480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.592514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.321 [2024-04-24 10:07:01.592530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.592580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.321 [2024-04-24 10:07:01.592595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.321 [2024-04-24 10:07:01.592645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:74638577311547391 len:17023 00:11:48.321 [2024-04-24 10:07:01.592659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.580 #30 NEW cov: 11742 ft: 13971 corp: 24/906b lim: 50 exec/s: 30 rss: 69Mb L: 49/50 MS: 1 PersAutoDict- DE: "\001\011+cB~\234\306"- 00:11:48.580 [2024-04-24 10:07:01.632653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.580 [2024-04-24 10:07:01.632681] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.580 [2024-04-24 10:07:01.632726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.580 [2024-04-24 10:07:01.632741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.580 [2024-04-24 10:07:01.632792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551611 len:65536 00:11:48.580 [2024-04-24 10:07:01.632807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.580 [2024-04-24 10:07:01.632857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:48.580 [2024-04-24 10:07:01.632872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.580 [2024-04-24 10:07:01.632924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:10634005409016288147 len:37780 00:11:48.580 [2024-04-24 10:07:01.632938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:48.580 #31 NEW cov: 11742 ft: 13986 corp: 25/956b lim: 50 exec/s: 31 rss: 69Mb L: 50/50 MS: 1 CopyPart- 00:11:48.581 [2024-04-24 10:07:01.672822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.581 [2024-04-24 10:07:01.672848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.672892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743227600994303 len:65536 00:11:48.581 [2024-04-24 10:07:01.672907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.672960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551611 len:65536 00:11:48.581 [2024-04-24 10:07:01.672975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.673024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:48.581 [2024-04-24 10:07:01.673040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.673093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:10634005409016288147 len:37780 00:11:48.581 [2024-04-24 10:07:01.673107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:48.581 #32 NEW cov: 11742 ft: 14063 corp: 26/1006b lim: 50 exec/s: 32 rss: 69Mb L: 50/50 MS: 1 ChangeByte- 00:11:48.581 [2024-04-24 10:07:01.712693] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.581 [2024-04-24 10:07:01.712719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.712753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3603171261445769738 len:25411 00:11:48.581 [2024-04-24 10:07:01.712769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.712821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744071534796127 len:65536 00:11:48.581 [2024-04-24 10:07:01.712835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.581 #33 NEW cov: 11742 ft: 14074 corp: 27/1040b lim: 50 exec/s: 33 rss: 69Mb L: 34/50 MS: 1 ChangeByte- 00:11:48.581 [2024-04-24 10:07:01.752909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743017318711295 len:2595 00:11:48.581 [2024-04-24 10:07:01.752936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.752980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.581 [2024-04-24 10:07:01.752997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.753046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.581 [2024-04-24 10:07:01.753064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.753114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:74638577311547391 len:17023 00:11:48.581 [2024-04-24 10:07:01.753129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.581 #34 NEW cov: 11742 ft: 14081 corp: 28/1089b lim: 50 exec/s: 34 rss: 69Mb L: 49/50 MS: 1 ChangeBit- 00:11:48.581 [2024-04-24 10:07:01.792926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.581 [2024-04-24 10:07:01.792955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.792990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3603171261445769738 len:25411 00:11:48.581 [2024-04-24 10:07:01.793005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.793056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:660730907579080449 len:32413 00:11:48.581 [2024-04-24 10:07:01.793077] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.581 #35 NEW cov: 11742 ft: 14094 corp: 29/1123b lim: 50 exec/s: 35 rss: 69Mb L: 34/50 MS: 1 PersAutoDict- DE: "\001\011+cB~\234\306"- 00:11:48.581 [2024-04-24 10:07:01.833214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743017318711295 len:2595 00:11:48.581 [2024-04-24 10:07:01.833241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.833288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.581 [2024-04-24 10:07:01.833303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.833369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.581 [2024-04-24 10:07:01.833385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.833437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374978039218002943 len:25411 00:11:48.581 [2024-04-24 10:07:01.833452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.581 [2024-04-24 10:07:01.833506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:3126415677382721801 len:40135 00:11:48.581 [2024-04-24 10:07:01.833522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:48.840 #36 NEW cov: 11742 ft: 14155 corp: 30/1173b lim: 50 exec/s: 36 rss: 69Mb L: 50/50 MS: 1 InsertByte- 00:11:48.840 [2024-04-24 10:07:01.873283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743017318711295 len:2611 00:11:48.840 [2024-04-24 10:07:01.873309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.840 [2024-04-24 10:07:01.873355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.840 [2024-04-24 10:07:01.873371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.840 [2024-04-24 10:07:01.873421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.840 [2024-04-24 10:07:01.873436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.840 [2024-04-24 10:07:01.873489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:74638577311547391 len:17023 00:11:48.840 [2024-04-24 10:07:01.873503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:11:48.840 #37 NEW cov: 11742 ft: 14193 corp: 31/1222b lim: 50 exec/s: 37 rss: 70Mb L: 49/50 MS: 1 CopyPart- 00:11:48.840 [2024-04-24 10:07:01.913375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.840 [2024-04-24 10:07:01.913401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.840 [2024-04-24 10:07:01.913445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65470 00:11:48.840 [2024-04-24 10:07:01.913460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.840 [2024-04-24 10:07:01.913511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.840 [2024-04-24 10:07:01.913525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.840 [2024-04-24 10:07:01.913576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:74638577325047807 len:17023 00:11:48.840 [2024-04-24 10:07:01.913591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.840 #38 NEW cov: 11742 ft: 14246 corp: 32/1265b lim: 50 exec/s: 38 rss: 70Mb L: 43/50 MS: 1 PersAutoDict- DE: "\001\011+cB~\234\306"- 00:11:48.840 [2024-04-24 10:07:01.953201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7152418370170718507 len:50780 00:11:48.840 [2024-04-24 10:07:01.953228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.841 #41 NEW cov: 11742 ft: 14557 corp: 33/1275b lim: 50 exec/s: 41 rss: 70Mb L: 10/50 MS: 3 InsertByte-ChangeByte-PersAutoDict- DE: "\001\011+cB~\234\306"- 00:11:48.841 [2024-04-24 10:07:01.993750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:48.841 [2024-04-24 10:07:01.993777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:01.993828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 00:11:48.841 [2024-04-24 10:07:01.993844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:01.993894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:844420635164672 len:65536 00:11:48.841 [2024-04-24 10:07:01.993909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:01.993958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:48.841 [2024-04-24 10:07:01.993971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 
m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:01.994023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:10634005409016288147 len:37780 00:11:48.841 [2024-04-24 10:07:01.994037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:48.841 #42 NEW cov: 11742 ft: 14568 corp: 34/1325b lim: 50 exec/s: 42 rss: 70Mb L: 50/50 MS: 1 ChangeBinInt- 00:11:48.841 [2024-04-24 10:07:02.033717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743017318711295 len:2611 00:11:48.841 [2024-04-24 10:07:02.033744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:02.033785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.841 [2024-04-24 10:07:02.033803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:02.033852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.841 [2024-04-24 10:07:02.033866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:02.033917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:218753765387403263 len:17023 00:11:48.841 [2024-04-24 10:07:02.033931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.841 #43 NEW cov: 11742 ft: 14572 corp: 35/1374b lim: 50 exec/s: 43 rss: 70Mb L: 49/50 MS: 1 ChangeBit- 00:11:48.841 [2024-04-24 10:07:02.073859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:11:48.841 [2024-04-24 10:07:02.073889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:02.073923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.841 [2024-04-24 10:07:02.073939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:02.073988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:48.841 [2024-04-24 10:07:02.074004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:02.074057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:48.841 [2024-04-24 10:07:02.074078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:48.841 #44 NEW cov: 11742 ft: 14578 corp: 36/1419b lim: 50 exec/s: 44 rss: 70Mb L: 45/50 MS: 1 CopyPart- 00:11:48.841 [2024-04-24 10:07:02.113811] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:11:48.841 [2024-04-24 10:07:02.113838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:48.841 [2024-04-24 10:07:02.113873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:48.841 [2024-04-24 10:07:02.113889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:49.100 #45 NEW cov: 11742 ft: 14596 corp: 37/1446b lim: 50 exec/s: 45 rss: 70Mb L: 27/50 MS: 1 CMP- DE: "\377\377"- 00:11:49.100 [2024-04-24 10:07:02.154083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743017318711295 len:2611 00:11:49.100 [2024-04-24 10:07:02.154111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:49.100 [2024-04-24 10:07:02.154158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:49.100 [2024-04-24 10:07:02.154173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:49.100 [2024-04-24 10:07:02.154223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:49.100 [2024-04-24 10:07:02.154236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.154290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:49.101 [2024-04-24 10:07:02.154305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:49.101 #46 NEW cov: 11742 ft: 14616 corp: 38/1488b lim: 50 exec/s: 46 rss: 70Mb L: 42/50 MS: 1 CrossOver- 00:11:49.101 [2024-04-24 10:07:02.193862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:11:49.101 [2024-04-24 10:07:02.193889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:49.101 #47 NEW cov: 11742 ft: 14634 corp: 39/1502b lim: 50 exec/s: 47 rss: 70Mb L: 14/50 MS: 1 EraseBytes- 00:11:49.101 [2024-04-24 10:07:02.234468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743017318711295 len:2611 00:11:49.101 [2024-04-24 10:07:02.234496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.234542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:49.101 [2024-04-24 10:07:02.234557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.234608] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:49.101 [2024-04-24 10:07:02.234622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.234673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18375540989184913919 len:25411 00:11:49.101 [2024-04-24 10:07:02.234687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.234739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744071538788095 len:65536 00:11:49.101 [2024-04-24 10:07:02.234753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:49.101 #48 NEW cov: 11742 ft: 14651 corp: 40/1552b lim: 50 exec/s: 48 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:11:49.101 [2024-04-24 10:07:02.274441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:49.101 [2024-04-24 10:07:02.274468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.274503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:11:49.101 [2024-04-24 10:07:02.274519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.274571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551611 len:65536 00:11:49.101 [2024-04-24 10:07:02.274587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.274638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:11:49.101 [2024-04-24 10:07:02.274652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:49.101 #49 NEW cov: 11742 ft: 14667 corp: 41/1600b lim: 50 exec/s: 49 rss: 70Mb L: 48/50 MS: 1 EraseBytes- 00:11:49.101 [2024-04-24 10:07:02.314538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16429131436521042687 len:65536 00:11:49.101 [2024-04-24 10:07:02.314568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.314603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6872493031367335775 len:65536 00:11:49.101 [2024-04-24 10:07:02.314618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.314668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65330 00:11:49.101 [2024-04-24 10:07:02.314682] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.314734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073695789311 len:65536 00:11:49.101 [2024-04-24 10:07:02.314748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:49.101 #50 NEW cov: 11742 ft: 14689 corp: 42/1647b lim: 50 exec/s: 50 rss: 70Mb L: 47/50 MS: 1 InsertByte- 00:11:49.101 [2024-04-24 10:07:02.354683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743017318711295 len:2611 00:11:49.101 [2024-04-24 10:07:02.354710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.354752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:11:49.101 [2024-04-24 10:07:02.354768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.354818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:49.101 [2024-04-24 10:07:02.354834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:49.101 [2024-04-24 10:07:02.354886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:579041735577042943 len:17023 00:11:49.101 [2024-04-24 10:07:02.354902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:49.360 #51 NEW cov: 11742 ft: 14707 corp: 43/1696b lim: 50 exec/s: 51 rss: 70Mb L: 49/50 MS: 1 ChangeBinInt- 00:11:49.360 [2024-04-24 10:07:02.394791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069583024895 len:65536 00:11:49.360 [2024-04-24 10:07:02.394818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:49.360 [2024-04-24 10:07:02.394877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14267402932304437087 len:65536 00:11:49.360 [2024-04-24 10:07:02.394893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:49.360 [2024-04-24 10:07:02.394944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:11:49.360 [2024-04-24 10:07:02.394959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:49.360 [2024-04-24 10:07:02.395011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073696051199 len:65536 00:11:49.360 [2024-04-24 10:07:02.395026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:49.360 #52 NEW cov: 11742 ft: 14711 corp: 44/1745b lim: 50 exec/s: 26 rss: 
70Mb L: 49/50 MS: 1 ChangeByte- 00:11:49.360 #52 DONE cov: 11742 ft: 14711 corp: 44/1745b lim: 50 exec/s: 26 rss: 70Mb 00:11:49.360 ###### Recommended dictionary. ###### 00:11:49.360 "\001\011+cB~\234\306" # Uses: 5 00:11:49.360 "\377\377" # Uses: 0 00:11:49.360 ###### End of recommended dictionary. ###### 00:11:49.360 Done 52 runs in 2 second(s) 00:11:49.360 10:07:02 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_19.conf 00:11:49.360 10:07:02 -- ../common.sh@72 -- # (( i++ )) 00:11:49.360 10:07:02 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:49.360 10:07:02 -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:11:49.360 10:07:02 -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:11:49.360 10:07:02 -- nvmf/run.sh@24 -- # local timen=1 00:11:49.360 10:07:02 -- nvmf/run.sh@25 -- # local core=0x1 00:11:49.360 10:07:02 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:11:49.360 10:07:02 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:11:49.360 10:07:02 -- nvmf/run.sh@29 -- # printf %02d 20 00:11:49.360 10:07:02 -- nvmf/run.sh@29 -- # port=4420 00:11:49.360 10:07:02 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:11:49.360 10:07:02 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:11:49.360 10:07:02 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:49.360 10:07:02 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 -r /var/tmp/spdk20.sock 00:11:49.360 [2024-04-24 10:07:02.602628] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:49.360 [2024-04-24 10:07:02.602706] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175190 ] 00:11:49.618 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.878 [2024-04-24 10:07:02.921221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.878 [2024-04-24 10:07:03.014277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:49.878 [2024-04-24 10:07:03.014402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.878 [2024-04-24 10:07:03.072790] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.878 [2024-04-24 10:07:03.088982] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:49.878 INFO: Running with entropic power schedule (0xFF, 100). 
00:11:49.878 INFO: Seed: 3211053730 00:11:49.878 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:49.878 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:49.878 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:11:49.878 INFO: A corpus is not provided, starting from an empty corpus 00:11:49.878 #2 INITED exec/s: 0 rss: 61Mb 00:11:49.878 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:49.878 This may also happen if the target rejected all inputs we tried so far 00:11:49.878 [2024-04-24 10:07:03.134265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:49.878 [2024-04-24 10:07:03.134296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:49.878 [2024-04-24 10:07:03.134351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:49.878 [2024-04-24 10:07:03.134367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.395 NEW_FUNC[1/662]: 0x4a1cf0 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:11:50.395 NEW_FUNC[2/662]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:50.395 #4 NEW cov: 11549 ft: 11550 corp: 2/44b lim: 90 exec/s: 0 rss: 67Mb L: 43/43 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:11:50.395 [2024-04-24 10:07:03.464943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.395 [2024-04-24 10:07:03.464985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.395 NEW_FUNC[1/3]: 0x1533be0 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3789 00:11:50.395 NEW_FUNC[2/3]: 0x1701fc0 in spdk_nvme_probe_poll_async /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1507 00:11:50.395 #9 NEW cov: 11686 ft: 12708 corp: 3/67b lim: 90 exec/s: 0 rss: 67Mb L: 23/43 MS: 5 CrossOver-InsertByte-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:11:50.395 [2024-04-24 10:07:03.515150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.395 [2024-04-24 10:07:03.515179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.395 [2024-04-24 10:07:03.515231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.395 [2024-04-24 10:07:03.515246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.395 #15 NEW cov: 11692 ft: 13065 corp: 4/110b lim: 90 exec/s: 0 rss: 68Mb L: 43/43 MS: 1 CopyPart- 00:11:50.395 [2024-04-24 10:07:03.555244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.395 [2024-04-24 10:07:03.555273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:11:50.395 [2024-04-24 10:07:03.555326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.395 [2024-04-24 10:07:03.555343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.395 #20 NEW cov: 11777 ft: 13451 corp: 5/150b lim: 90 exec/s: 0 rss: 68Mb L: 40/43 MS: 5 ChangeByte-ShuffleBytes-ChangeBit-InsertRepeatedBytes-InsertRepeatedBytes- 00:11:50.395 [2024-04-24 10:07:03.595220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.395 [2024-04-24 10:07:03.595247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.395 #21 NEW cov: 11777 ft: 13531 corp: 6/169b lim: 90 exec/s: 0 rss: 68Mb L: 19/43 MS: 1 EraseBytes- 00:11:50.395 [2024-04-24 10:07:03.635454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.395 [2024-04-24 10:07:03.635480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.395 [2024-04-24 10:07:03.635521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.395 [2024-04-24 10:07:03.635536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.395 #22 NEW cov: 11777 ft: 13613 corp: 7/213b lim: 90 exec/s: 0 rss: 68Mb L: 44/44 MS: 1 InsertByte- 00:11:50.654 [2024-04-24 10:07:03.675607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.654 [2024-04-24 10:07:03.675635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.654 [2024-04-24 10:07:03.675678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.654 [2024-04-24 10:07:03.675694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.654 #28 NEW cov: 11777 ft: 13654 corp: 8/257b lim: 90 exec/s: 0 rss: 68Mb L: 44/44 MS: 1 CrossOver- 00:11:50.654 [2024-04-24 10:07:03.715709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.654 [2024-04-24 10:07:03.715737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.654 [2024-04-24 10:07:03.715807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.654 [2024-04-24 10:07:03.715824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.654 #29 NEW cov: 11777 ft: 13717 corp: 9/297b lim: 90 exec/s: 0 rss: 68Mb L: 40/44 MS: 1 ShuffleBytes- 00:11:50.654 [2024-04-24 10:07:03.755802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.654 [2024-04-24 10:07:03.755829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.654 [2024-04-24 
10:07:03.755864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.654 [2024-04-24 10:07:03.755879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.654 #30 NEW cov: 11777 ft: 13766 corp: 10/343b lim: 90 exec/s: 0 rss: 68Mb L: 46/46 MS: 1 InsertRepeatedBytes- 00:11:50.654 [2024-04-24 10:07:03.795954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.654 [2024-04-24 10:07:03.795981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.654 [2024-04-24 10:07:03.796024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.654 [2024-04-24 10:07:03.796038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.654 #31 NEW cov: 11777 ft: 13809 corp: 11/384b lim: 90 exec/s: 0 rss: 68Mb L: 41/46 MS: 1 InsertByte- 00:11:50.654 [2024-04-24 10:07:03.835934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.654 [2024-04-24 10:07:03.835961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.654 #32 NEW cov: 11777 ft: 13836 corp: 12/405b lim: 90 exec/s: 0 rss: 68Mb L: 21/46 MS: 1 EraseBytes- 00:11:50.654 [2024-04-24 10:07:03.876165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.654 [2024-04-24 10:07:03.876191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.654 [2024-04-24 10:07:03.876228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.654 [2024-04-24 10:07:03.876243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.654 #33 NEW cov: 11777 ft: 13868 corp: 13/446b lim: 90 exec/s: 0 rss: 68Mb L: 41/46 MS: 1 ChangeBit- 00:11:50.654 [2024-04-24 10:07:03.916581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.654 [2024-04-24 10:07:03.916607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.654 [2024-04-24 10:07:03.916655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.654 [2024-04-24 10:07:03.916670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.654 [2024-04-24 10:07:03.916722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:11:50.654 [2024-04-24 10:07:03.916736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:50.654 [2024-04-24 10:07:03.916790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:11:50.654 [2024-04-24 10:07:03.916805] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:50.913 #34 NEW cov: 11777 ft: 14279 corp: 14/520b lim: 90 exec/s: 0 rss: 68Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:11:50.913 [2024-04-24 10:07:03.956389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.913 [2024-04-24 10:07:03.956416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.914 [2024-04-24 10:07:03.956458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.914 [2024-04-24 10:07:03.956473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.914 #40 NEW cov: 11777 ft: 14337 corp: 15/565b lim: 90 exec/s: 0 rss: 68Mb L: 45/74 MS: 1 InsertByte- 00:11:50.914 [2024-04-24 10:07:03.996821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.914 [2024-04-24 10:07:03.996847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.914 [2024-04-24 10:07:03.996893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.914 [2024-04-24 10:07:03.996908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.914 [2024-04-24 10:07:03.996976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:11:50.914 [2024-04-24 10:07:03.996991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:50.914 [2024-04-24 10:07:03.997044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:11:50.914 [2024-04-24 10:07:03.997064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:50.914 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:50.914 #41 NEW cov: 11800 ft: 14356 corp: 16/651b lim: 90 exec/s: 0 rss: 68Mb L: 86/86 MS: 1 CrossOver- 00:11:50.914 [2024-04-24 10:07:04.046643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.914 [2024-04-24 10:07:04.046669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.914 [2024-04-24 10:07:04.046704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.914 [2024-04-24 10:07:04.046718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.914 #42 NEW cov: 11800 ft: 14412 corp: 17/701b lim: 90 exec/s: 0 rss: 69Mb L: 50/86 MS: 1 CrossOver- 00:11:50.914 [2024-04-24 10:07:04.086686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.914 [2024-04-24 10:07:04.086711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:11:50.914 #43 NEW cov: 11800 ft: 14426 corp: 18/724b lim: 90 exec/s: 0 rss: 69Mb L: 23/86 MS: 1 ShuffleBytes- 00:11:50.914 [2024-04-24 10:07:04.126851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.914 [2024-04-24 10:07:04.126878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.914 [2024-04-24 10:07:04.126930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.914 [2024-04-24 10:07:04.126951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.914 #44 NEW cov: 11800 ft: 14485 corp: 19/768b lim: 90 exec/s: 44 rss: 69Mb L: 44/86 MS: 1 CMP- DE: "\000\000\000\010"- 00:11:50.914 [2024-04-24 10:07:04.167000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:50.914 [2024-04-24 10:07:04.167026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:50.914 [2024-04-24 10:07:04.167089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:50.914 [2024-04-24 10:07:04.167105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:50.914 #45 NEW cov: 11800 ft: 14510 corp: 20/808b lim: 90 exec/s: 45 rss: 69Mb L: 40/86 MS: 1 ChangeBinInt- 00:11:51.173 [2024-04-24 10:07:04.197082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.173 [2024-04-24 10:07:04.197108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.173 [2024-04-24 10:07:04.197145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.173 [2024-04-24 10:07:04.197160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.173 #46 NEW cov: 11800 ft: 14574 corp: 21/854b lim: 90 exec/s: 46 rss: 69Mb L: 46/86 MS: 1 ChangeByte- 00:11:51.173 [2024-04-24 10:07:04.237215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.173 [2024-04-24 10:07:04.237241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.173 [2024-04-24 10:07:04.237285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.173 [2024-04-24 10:07:04.237300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.173 #47 NEW cov: 11800 ft: 14588 corp: 22/899b lim: 90 exec/s: 47 rss: 69Mb L: 45/86 MS: 1 InsertByte- 00:11:51.173 [2024-04-24 10:07:04.277300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.173 [2024-04-24 10:07:04.277327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.173 [2024-04-24 10:07:04.277365] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.173 [2024-04-24 10:07:04.277381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.173 #48 NEW cov: 11800 ft: 14599 corp: 23/944b lim: 90 exec/s: 48 rss: 69Mb L: 45/86 MS: 1 EraseBytes- 00:11:51.173 [2024-04-24 10:07:04.317401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.173 [2024-04-24 10:07:04.317427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.173 [2024-04-24 10:07:04.317465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.173 [2024-04-24 10:07:04.317480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.173 #49 NEW cov: 11800 ft: 14617 corp: 24/987b lim: 90 exec/s: 49 rss: 69Mb L: 43/86 MS: 1 ChangeBinInt- 00:11:51.173 [2024-04-24 10:07:04.357840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.173 [2024-04-24 10:07:04.357866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.173 [2024-04-24 10:07:04.357903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.173 [2024-04-24 10:07:04.357920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.173 [2024-04-24 10:07:04.357972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:11:51.173 [2024-04-24 10:07:04.357987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:51.173 [2024-04-24 10:07:04.358040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:11:51.173 [2024-04-24 10:07:04.358056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:51.173 #50 NEW cov: 11800 ft: 14627 corp: 25/1073b lim: 90 exec/s: 50 rss: 69Mb L: 86/86 MS: 1 ShuffleBytes- 00:11:51.173 [2024-04-24 10:07:04.397658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.173 [2024-04-24 10:07:04.397685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.173 [2024-04-24 10:07:04.397732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.173 [2024-04-24 10:07:04.397747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.173 #51 NEW cov: 11800 ft: 14648 corp: 26/1118b lim: 90 exec/s: 51 rss: 69Mb L: 45/86 MS: 1 PersAutoDict- DE: "\000\000\000\010"- 00:11:51.173 [2024-04-24 10:07:04.437801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.173 [2024-04-24 10:07:04.437828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.173 [2024-04-24 10:07:04.437865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.173 [2024-04-24 10:07:04.437880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.433 #52 NEW cov: 11800 ft: 14693 corp: 27/1163b lim: 90 exec/s: 52 rss: 70Mb L: 45/86 MS: 1 ChangeBinInt- 00:11:51.433 [2024-04-24 10:07:04.477898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.433 [2024-04-24 10:07:04.477924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.433 [2024-04-24 10:07:04.477962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.433 [2024-04-24 10:07:04.477978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.433 #53 NEW cov: 11800 ft: 14706 corp: 28/1208b lim: 90 exec/s: 53 rss: 70Mb L: 45/86 MS: 1 ChangeBit- 00:11:51.433 [2024-04-24 10:07:04.517877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.433 [2024-04-24 10:07:04.517903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.433 #54 NEW cov: 11800 ft: 14710 corp: 29/1239b lim: 90 exec/s: 54 rss: 70Mb L: 31/86 MS: 1 EraseBytes- 00:11:51.433 [2024-04-24 10:07:04.558445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.433 [2024-04-24 10:07:04.558472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.433 [2024-04-24 10:07:04.558535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.433 [2024-04-24 10:07:04.558551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.433 [2024-04-24 10:07:04.558605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:11:51.433 [2024-04-24 10:07:04.558623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:51.433 [2024-04-24 10:07:04.558677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:11:51.433 [2024-04-24 10:07:04.558692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:51.433 #55 NEW cov: 11800 ft: 14724 corp: 30/1322b lim: 90 exec/s: 55 rss: 70Mb L: 83/86 MS: 1 CrossOver- 00:11:51.433 [2024-04-24 10:07:04.598241] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.433 [2024-04-24 10:07:04.598267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.433 [2024-04-24 10:07:04.598303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:1 nsid:0 00:11:51.433 [2024-04-24 10:07:04.598319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.433 #56 NEW cov: 11800 ft: 14784 corp: 31/1362b lim: 90 exec/s: 56 rss: 70Mb L: 40/86 MS: 1 ChangeBit- 00:11:51.433 [2024-04-24 10:07:04.638225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.433 [2024-04-24 10:07:04.638250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.433 #57 NEW cov: 11800 ft: 14866 corp: 32/1395b lim: 90 exec/s: 57 rss: 70Mb L: 33/86 MS: 1 CrossOver- 00:11:51.433 [2024-04-24 10:07:04.678293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.433 [2024-04-24 10:07:04.678320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.433 #58 NEW cov: 11800 ft: 14871 corp: 33/1418b lim: 90 exec/s: 58 rss: 70Mb L: 23/86 MS: 1 ChangeByte- 00:11:51.693 [2024-04-24 10:07:04.718561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.693 [2024-04-24 10:07:04.718587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.693 [2024-04-24 10:07:04.718643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.693 [2024-04-24 10:07:04.718659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.693 #59 NEW cov: 11800 ft: 14936 corp: 34/1465b lim: 90 exec/s: 59 rss: 70Mb L: 47/86 MS: 1 PersAutoDict- DE: "\000\000\000\010"- 00:11:51.693 [2024-04-24 10:07:04.758686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.693 [2024-04-24 10:07:04.758713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.693 [2024-04-24 10:07:04.758767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.693 [2024-04-24 10:07:04.758783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.693 #60 NEW cov: 11800 ft: 14948 corp: 35/1511b lim: 90 exec/s: 60 rss: 70Mb L: 46/86 MS: 1 InsertByte- 00:11:51.693 [2024-04-24 10:07:04.788786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.693 [2024-04-24 10:07:04.788813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.693 [2024-04-24 10:07:04.788864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.693 [2024-04-24 10:07:04.788881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.693 #61 NEW cov: 11800 ft: 14989 corp: 36/1556b lim: 90 exec/s: 61 rss: 70Mb L: 45/86 MS: 1 ChangeBit- 00:11:51.693 [2024-04-24 10:07:04.828907] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.693 [2024-04-24 10:07:04.828934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.693 [2024-04-24 10:07:04.828977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.693 [2024-04-24 10:07:04.828993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.693 #62 NEW cov: 11800 ft: 15015 corp: 37/1603b lim: 90 exec/s: 62 rss: 70Mb L: 47/86 MS: 1 InsertByte- 00:11:51.693 [2024-04-24 10:07:04.868998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.693 [2024-04-24 10:07:04.869025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.693 [2024-04-24 10:07:04.869097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.693 [2024-04-24 10:07:04.869113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.693 #63 NEW cov: 11800 ft: 15081 corp: 38/1644b lim: 90 exec/s: 63 rss: 70Mb L: 41/86 MS: 1 InsertByte- 00:11:51.693 [2024-04-24 10:07:04.909077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.693 [2024-04-24 10:07:04.909105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.693 [2024-04-24 10:07:04.909141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.693 [2024-04-24 10:07:04.909157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.693 #64 NEW cov: 11800 ft: 15154 corp: 39/1691b lim: 90 exec/s: 64 rss: 70Mb L: 47/86 MS: 1 CopyPart- 00:11:51.693 [2024-04-24 10:07:04.949040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.693 [2024-04-24 10:07:04.949071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.952 #70 NEW cov: 11800 ft: 15229 corp: 40/1724b lim: 90 exec/s: 70 rss: 70Mb L: 33/86 MS: 1 EraseBytes- 00:11:51.952 [2024-04-24 10:07:04.989193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.952 [2024-04-24 10:07:04.989219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.952 #71 NEW cov: 11800 ft: 15242 corp: 41/1743b lim: 90 exec/s: 71 rss: 70Mb L: 19/86 MS: 1 ChangeASCIIInt- 00:11:51.952 [2024-04-24 10:07:05.029728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.952 [2024-04-24 10:07:05.029755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.952 [2024-04-24 10:07:05.029802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:11:51.952 [2024-04-24 10:07:05.029817] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:51.952 [2024-04-24 10:07:05.029869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:11:51.952 [2024-04-24 10:07:05.029884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:51.952 [2024-04-24 10:07:05.029935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:11:51.952 [2024-04-24 10:07:05.029950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:51.952 #72 NEW cov: 11800 ft: 15284 corp: 42/1817b lim: 90 exec/s: 72 rss: 70Mb L: 74/86 MS: 1 ChangeBinInt- 00:11:51.952 [2024-04-24 10:07:05.069418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.952 [2024-04-24 10:07:05.069446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.952 #77 NEW cov: 11800 ft: 15288 corp: 43/1835b lim: 90 exec/s: 77 rss: 70Mb L: 18/86 MS: 5 ChangeByte-ChangeBinInt-CMP-ChangeBinInt-CopyPart- DE: "\377\377\377\377\377\377\377\377"- 00:11:51.952 [2024-04-24 10:07:05.109498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:11:51.952 [2024-04-24 10:07:05.109525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:51.952 #78 NEW cov: 11800 ft: 15295 corp: 44/1858b lim: 90 exec/s: 39 rss: 70Mb L: 23/86 MS: 1 ChangeByte- 00:11:51.952 #78 DONE cov: 11800 ft: 15295 corp: 44/1858b lim: 90 exec/s: 39 rss: 70Mb 00:11:51.952 ###### Recommended dictionary. ###### 00:11:51.952 "\000\000\000\010" # Uses: 3 00:11:51.952 "\377\377\377\377\377\377\377\377" # Uses: 0 00:11:51.952 ###### End of recommended dictionary. 
###### 00:11:51.952 Done 78 runs in 2 second(s) 00:11:52.211 10:07:05 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_20.conf 00:11:52.211 10:07:05 -- ../common.sh@72 -- # (( i++ )) 00:11:52.211 10:07:05 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:52.211 10:07:05 -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:11:52.211 10:07:05 -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:11:52.211 10:07:05 -- nvmf/run.sh@24 -- # local timen=1 00:11:52.211 10:07:05 -- nvmf/run.sh@25 -- # local core=0x1 00:11:52.211 10:07:05 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:11:52.211 10:07:05 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:11:52.211 10:07:05 -- nvmf/run.sh@29 -- # printf %02d 21 00:11:52.211 10:07:05 -- nvmf/run.sh@29 -- # port=4421 00:11:52.211 10:07:05 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:11:52.211 10:07:05 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:11:52.211 10:07:05 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:52.211 10:07:05 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 -r /var/tmp/spdk21.sock 00:11:52.211 [2024-04-24 10:07:05.326287] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:52.212 [2024-04-24 10:07:05.326359] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175560 ] 00:11:52.212 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.471 [2024-04-24 10:07:05.635811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.471 [2024-04-24 10:07:05.728013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:52.471 [2024-04-24 10:07:05.728163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.730 [2024-04-24 10:07:05.787145] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.730 [2024-04-24 10:07:05.803338] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:11:52.730 INFO: Running with entropic power schedule (0xFF, 100). 00:11:52.730 INFO: Seed: 1631067074 00:11:52.730 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:52.730 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:52.730 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:11:52.730 INFO: A corpus is not provided, starting from an empty corpus 00:11:52.730 #2 INITED exec/s: 0 rss: 61Mb 00:11:52.730 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:52.730 This may also happen if the target rejected all inputs we tried so far 00:11:52.730 [2024-04-24 10:07:05.859065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:52.730 [2024-04-24 10:07:05.859095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:52.730 [2024-04-24 10:07:05.859133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:52.730 [2024-04-24 10:07:05.859149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:52.730 [2024-04-24 10:07:05.859204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:52.730 [2024-04-24 10:07:05.859219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:52.730 [2024-04-24 10:07:05.859275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:52.730 [2024-04-24 10:07:05.859290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:52.989 NEW_FUNC[1/665]: 0x4a4f10 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:11:52.989 NEW_FUNC[2/665]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:52.989 #28 NEW cov: 11548 ft: 11549 corp: 2/43b lim: 50 exec/s: 0 rss: 67Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:11:52.989 [2024-04-24 10:07:06.169614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:52.989 [2024-04-24 10:07:06.169664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:52.989 [2024-04-24 10:07:06.169729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:52.989 [2024-04-24 10:07:06.169750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:52.989 [2024-04-24 10:07:06.169811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:52.989 [2024-04-24 10:07:06.169832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:52.989 #29 NEW cov: 11661 ft: 12343 corp: 3/73b lim: 50 exec/s: 0 rss: 67Mb L: 30/42 MS: 1 InsertRepeatedBytes- 00:11:52.989 [2024-04-24 10:07:06.209593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:52.989 [2024-04-24 10:07:06.209622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:52.989 [2024-04-24 10:07:06.209667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:52.989 [2024-04-24 10:07:06.209680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:11:52.989 [2024-04-24 10:07:06.209733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:52.989 [2024-04-24 10:07:06.209748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:52.989 #30 NEW cov: 11667 ft: 12462 corp: 4/103b lim: 50 exec/s: 0 rss: 68Mb L: 30/42 MS: 1 ChangeByte- 00:11:52.989 [2024-04-24 10:07:06.249701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:52.989 [2024-04-24 10:07:06.249728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:52.989 [2024-04-24 10:07:06.249789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:52.989 [2024-04-24 10:07:06.249805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:52.989 [2024-04-24 10:07:06.249857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:52.989 [2024-04-24 10:07:06.249872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.248 #31 NEW cov: 11752 ft: 12672 corp: 5/133b lim: 50 exec/s: 0 rss: 68Mb L: 30/42 MS: 1 ChangeBinInt- 00:11:53.249 [2024-04-24 10:07:06.289711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.249 [2024-04-24 10:07:06.289738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.249 [2024-04-24 10:07:06.289798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.249 [2024-04-24 10:07:06.289814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.249 #33 NEW cov: 11752 ft: 13066 corp: 6/161b lim: 50 exec/s: 0 rss: 68Mb L: 28/42 MS: 2 ShuffleBytes-CrossOver- 00:11:53.249 [2024-04-24 10:07:06.329607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.249 [2024-04-24 10:07:06.329634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.249 #36 NEW cov: 11752 ft: 13985 corp: 7/180b lim: 50 exec/s: 0 rss: 68Mb L: 19/42 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:11:53.249 [2024-04-24 10:07:06.370214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.249 [2024-04-24 10:07:06.370241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.249 [2024-04-24 10:07:06.370288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.249 [2024-04-24 10:07:06.370304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.249 [2024-04-24 10:07:06.370357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.249 [2024-04-24 
10:07:06.370372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.249 [2024-04-24 10:07:06.370423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:53.249 [2024-04-24 10:07:06.370438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:53.249 #37 NEW cov: 11752 ft: 14132 corp: 8/223b lim: 50 exec/s: 0 rss: 68Mb L: 43/43 MS: 1 InsertByte- 00:11:53.249 [2024-04-24 10:07:06.409884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.249 [2024-04-24 10:07:06.409910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.249 #38 NEW cov: 11752 ft: 14183 corp: 9/240b lim: 50 exec/s: 0 rss: 68Mb L: 17/43 MS: 1 EraseBytes- 00:11:53.249 [2024-04-24 10:07:06.450451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.249 [2024-04-24 10:07:06.450478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.249 [2024-04-24 10:07:06.450520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.249 [2024-04-24 10:07:06.450536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.249 [2024-04-24 10:07:06.450604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.249 [2024-04-24 10:07:06.450620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.249 [2024-04-24 10:07:06.450670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:53.249 [2024-04-24 10:07:06.450685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:53.249 #39 NEW cov: 11752 ft: 14230 corp: 10/283b lim: 50 exec/s: 0 rss: 68Mb L: 43/43 MS: 1 ChangeBinInt- 00:11:53.249 [2024-04-24 10:07:06.490417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.249 [2024-04-24 10:07:06.490445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.249 [2024-04-24 10:07:06.490486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.249 [2024-04-24 10:07:06.490502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.249 [2024-04-24 10:07:06.490553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.249 [2024-04-24 10:07:06.490568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.249 #40 NEW cov: 11752 ft: 14314 corp: 11/313b lim: 50 exec/s: 0 rss: 68Mb L: 30/43 MS: 1 ChangeBit- 00:11:53.550 [2024-04-24 10:07:06.530373] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.550 [2024-04-24 10:07:06.530399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.550 #45 NEW cov: 11752 ft: 14365 corp: 12/331b lim: 50 exec/s: 0 rss: 68Mb L: 18/43 MS: 5 ChangeBit-CopyPart-ShuffleBytes-CopyPart-CrossOver- 00:11:53.550 [2024-04-24 10:07:06.570460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.550 [2024-04-24 10:07:06.570487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.550 #46 NEW cov: 11752 ft: 14400 corp: 13/343b lim: 50 exec/s: 0 rss: 68Mb L: 12/43 MS: 1 CrossOver- 00:11:53.550 [2024-04-24 10:07:06.611038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.551 [2024-04-24 10:07:06.611067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.611116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.551 [2024-04-24 10:07:06.611131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.611182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.551 [2024-04-24 10:07:06.611198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.611252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:53.551 [2024-04-24 10:07:06.611266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:53.551 #47 NEW cov: 11752 ft: 14478 corp: 14/385b lim: 50 exec/s: 0 rss: 68Mb L: 42/43 MS: 1 ShuffleBytes- 00:11:53.551 [2024-04-24 10:07:06.650962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.551 [2024-04-24 10:07:06.650988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.651027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.551 [2024-04-24 10:07:06.651042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.651097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.551 [2024-04-24 10:07:06.651112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.551 #48 NEW cov: 11752 ft: 14529 corp: 15/415b lim: 50 exec/s: 0 rss: 68Mb L: 30/43 MS: 1 ChangeByte- 00:11:53.551 [2024-04-24 10:07:06.691234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.551 [2024-04-24 10:07:06.691259] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.691305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.551 [2024-04-24 10:07:06.691320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.691373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.551 [2024-04-24 10:07:06.691388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.691438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:53.551 [2024-04-24 10:07:06.691453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:53.551 #49 NEW cov: 11752 ft: 14580 corp: 16/458b lim: 50 exec/s: 0 rss: 68Mb L: 43/43 MS: 1 CrossOver- 00:11:53.551 [2024-04-24 10:07:06.731381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.551 [2024-04-24 10:07:06.731407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.731471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.551 [2024-04-24 10:07:06.731487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.731539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.551 [2024-04-24 10:07:06.731554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.731606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:53.551 [2024-04-24 10:07:06.731622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:53.551 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:53.551 #50 NEW cov: 11775 ft: 14650 corp: 17/503b lim: 50 exec/s: 0 rss: 69Mb L: 45/45 MS: 1 CMP- DE: "\017\000"- 00:11:53.551 [2024-04-24 10:07:06.771378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.551 [2024-04-24 10:07:06.771404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.771455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.551 [2024-04-24 10:07:06.771472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.771524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.551 [2024-04-24 10:07:06.771542] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.551 #51 NEW cov: 11775 ft: 14678 corp: 18/533b lim: 50 exec/s: 0 rss: 69Mb L: 30/45 MS: 1 ChangeByte- 00:11:53.551 [2024-04-24 10:07:06.811487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.551 [2024-04-24 10:07:06.811516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.811565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.551 [2024-04-24 10:07:06.811580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.551 [2024-04-24 10:07:06.811632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.551 [2024-04-24 10:07:06.811647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.829 #52 NEW cov: 11775 ft: 14712 corp: 19/565b lim: 50 exec/s: 52 rss: 69Mb L: 32/45 MS: 1 CopyPart- 00:11:53.829 [2024-04-24 10:07:06.861750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.829 [2024-04-24 10:07:06.861778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:06.861830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.829 [2024-04-24 10:07:06.861846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:06.861897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.829 [2024-04-24 10:07:06.861912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:06.861965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:53.829 [2024-04-24 10:07:06.861980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:53.829 #58 NEW cov: 11775 ft: 14730 corp: 20/608b lim: 50 exec/s: 58 rss: 69Mb L: 43/45 MS: 1 ShuffleBytes- 00:11:53.829 [2024-04-24 10:07:06.901413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.829 [2024-04-24 10:07:06.901439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.829 #59 NEW cov: 11775 ft: 14739 corp: 21/620b lim: 50 exec/s: 59 rss: 69Mb L: 12/45 MS: 1 ShuffleBytes- 00:11:53.829 [2024-04-24 10:07:06.941809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.829 [2024-04-24 10:07:06.941836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:06.941870] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.829 [2024-04-24 10:07:06.941886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:06.941938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.829 [2024-04-24 10:07:06.941953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.829 #60 NEW cov: 11775 ft: 14768 corp: 22/652b lim: 50 exec/s: 60 rss: 69Mb L: 32/45 MS: 1 ChangeBinInt- 00:11:53.829 [2024-04-24 10:07:06.982105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.829 [2024-04-24 10:07:06.982135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:06.982171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.829 [2024-04-24 10:07:06.982186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:06.982236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.829 [2024-04-24 10:07:06.982251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:06.982303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:53.829 [2024-04-24 10:07:06.982317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:53.829 #61 NEW cov: 11775 ft: 14841 corp: 23/695b lim: 50 exec/s: 61 rss: 69Mb L: 43/45 MS: 1 ChangeBit- 00:11:53.829 [2024-04-24 10:07:07.022159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.829 [2024-04-24 10:07:07.022184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:07.022234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.829 [2024-04-24 10:07:07.022250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:07.022299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.829 [2024-04-24 10:07:07.022314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:07.022365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:53.829 [2024-04-24 10:07:07.022380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:53.829 #62 NEW cov: 11775 ft: 14852 corp: 24/738b lim: 50 exec/s: 62 rss: 69Mb L: 43/45 MS: 1 PersAutoDict- DE: "\017\000"- 00:11:53.829 [2024-04-24 10:07:07.062151] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.829 [2024-04-24 10:07:07.062177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:07.062228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.829 [2024-04-24 10:07:07.062243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:07.062297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.829 [2024-04-24 10:07:07.062312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.829 #63 NEW cov: 11775 ft: 14874 corp: 25/776b lim: 50 exec/s: 63 rss: 69Mb L: 38/45 MS: 1 InsertRepeatedBytes- 00:11:53.829 [2024-04-24 10:07:07.102642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:53.829 [2024-04-24 10:07:07.102670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:07.102718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:53.829 [2024-04-24 10:07:07.102734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:07.102785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:53.829 [2024-04-24 10:07:07.102804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:07.102855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:53.829 [2024-04-24 10:07:07.102870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:53.829 [2024-04-24 10:07:07.102923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:11:53.829 [2024-04-24 10:07:07.102937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:54.088 #64 NEW cov: 11775 ft: 14924 corp: 26/826b lim: 50 exec/s: 64 rss: 69Mb L: 50/50 MS: 1 CrossOver- 00:11:54.088 [2024-04-24 10:07:07.142434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.088 [2024-04-24 10:07:07.142460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.088 [2024-04-24 10:07:07.142495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.088 [2024-04-24 10:07:07.142510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.088 [2024-04-24 10:07:07.142563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:54.088 [2024-04-24 
10:07:07.142577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:54.088 #65 NEW cov: 11775 ft: 14952 corp: 27/856b lim: 50 exec/s: 65 rss: 70Mb L: 30/50 MS: 1 ChangeBinInt- 00:11:54.088 [2024-04-24 10:07:07.182603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.088 [2024-04-24 10:07:07.182630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.088 [2024-04-24 10:07:07.182671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.088 [2024-04-24 10:07:07.182686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.088 [2024-04-24 10:07:07.182755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:54.088 [2024-04-24 10:07:07.182769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:54.088 [2024-04-24 10:07:07.182823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:54.088 [2024-04-24 10:07:07.182838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:54.088 #69 NEW cov: 11775 ft: 14960 corp: 28/899b lim: 50 exec/s: 69 rss: 70Mb L: 43/50 MS: 4 ShuffleBytes-CrossOver-ShuffleBytes-CrossOver- 00:11:54.088 [2024-04-24 10:07:07.212714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.088 [2024-04-24 10:07:07.212741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.088 [2024-04-24 10:07:07.212778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.088 [2024-04-24 10:07:07.212794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.088 [2024-04-24 10:07:07.212848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:54.088 [2024-04-24 10:07:07.212862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:54.088 [2024-04-24 10:07:07.212919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:54.088 [2024-04-24 10:07:07.212934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:54.088 #70 NEW cov: 11775 ft: 14970 corp: 29/943b lim: 50 exec/s: 70 rss: 70Mb L: 44/50 MS: 1 InsertRepeatedBytes- 00:11:54.088 [2024-04-24 10:07:07.252673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.089 [2024-04-24 10:07:07.252698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.089 [2024-04-24 10:07:07.252745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 
cid:1 nsid:0 00:11:54.089 [2024-04-24 10:07:07.252759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.089 [2024-04-24 10:07:07.252812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:54.089 [2024-04-24 10:07:07.252827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:54.089 #71 NEW cov: 11775 ft: 14987 corp: 30/973b lim: 50 exec/s: 71 rss: 70Mb L: 30/50 MS: 1 CopyPart- 00:11:54.089 [2024-04-24 10:07:07.292502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.089 [2024-04-24 10:07:07.292528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.089 #72 NEW cov: 11775 ft: 15004 corp: 31/985b lim: 50 exec/s: 72 rss: 70Mb L: 12/50 MS: 1 ShuffleBytes- 00:11:54.089 [2024-04-24 10:07:07.333011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.089 [2024-04-24 10:07:07.333037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.089 [2024-04-24 10:07:07.333103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.089 [2024-04-24 10:07:07.333118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.089 [2024-04-24 10:07:07.333172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:54.089 [2024-04-24 10:07:07.333187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:54.089 [2024-04-24 10:07:07.333247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:54.089 [2024-04-24 10:07:07.333262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:54.089 #73 NEW cov: 11775 ft: 15012 corp: 32/1028b lim: 50 exec/s: 73 rss: 70Mb L: 43/50 MS: 1 CopyPart- 00:11:54.347 [2024-04-24 10:07:07.373179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.347 [2024-04-24 10:07:07.373206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.347 [2024-04-24 10:07:07.373251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.347 [2024-04-24 10:07:07.373267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.347 [2024-04-24 10:07:07.373319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:54.347 [2024-04-24 10:07:07.373334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:54.348 [2024-04-24 10:07:07.373363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 
nsid:0 00:11:54.348 [2024-04-24 10:07:07.373380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:54.348 #74 NEW cov: 11775 ft: 15048 corp: 33/1071b lim: 50 exec/s: 74 rss: 70Mb L: 43/50 MS: 1 PersAutoDict- DE: "\017\000"- 00:11:54.348 [2024-04-24 10:07:07.412988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.348 [2024-04-24 10:07:07.413014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.348 [2024-04-24 10:07:07.413052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.348 [2024-04-24 10:07:07.413072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.348 #75 NEW cov: 11775 ft: 15107 corp: 34/1098b lim: 50 exec/s: 75 rss: 70Mb L: 27/50 MS: 1 EraseBytes- 00:11:54.348 [2024-04-24 10:07:07.452992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.348 [2024-04-24 10:07:07.453017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.348 #76 NEW cov: 11775 ft: 15138 corp: 35/1115b lim: 50 exec/s: 76 rss: 70Mb L: 17/50 MS: 1 EraseBytes- 00:11:54.348 [2024-04-24 10:07:07.493471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.348 [2024-04-24 10:07:07.493497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.348 [2024-04-24 10:07:07.493560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.348 [2024-04-24 10:07:07.493577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.348 [2024-04-24 10:07:07.493628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:54.348 [2024-04-24 10:07:07.493644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:54.348 [2024-04-24 10:07:07.493697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:54.348 [2024-04-24 10:07:07.493712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:54.348 #77 NEW cov: 11775 ft: 15179 corp: 36/1162b lim: 50 exec/s: 77 rss: 70Mb L: 47/50 MS: 1 InsertRepeatedBytes- 00:11:54.348 [2024-04-24 10:07:07.533200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.348 [2024-04-24 10:07:07.533226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.348 #78 NEW cov: 11775 ft: 15187 corp: 37/1172b lim: 50 exec/s: 78 rss: 70Mb L: 10/50 MS: 1 InsertRepeatedBytes- 00:11:54.348 [2024-04-24 10:07:07.573450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.348 [2024-04-24 10:07:07.573476] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.348 [2024-04-24 10:07:07.573529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.348 [2024-04-24 10:07:07.573544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.348 #79 NEW cov: 11775 ft: 15197 corp: 38/1200b lim: 50 exec/s: 79 rss: 70Mb L: 28/50 MS: 1 ChangeByte- 00:11:54.348 [2024-04-24 10:07:07.613413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.348 [2024-04-24 10:07:07.613440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.608 #80 NEW cov: 11775 ft: 15263 corp: 39/1212b lim: 50 exec/s: 80 rss: 71Mb L: 12/50 MS: 1 EraseBytes- 00:11:54.608 [2024-04-24 10:07:07.663822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.608 [2024-04-24 10:07:07.663848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.608 [2024-04-24 10:07:07.663900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.608 [2024-04-24 10:07:07.663916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.608 [2024-04-24 10:07:07.663969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:54.608 [2024-04-24 10:07:07.663983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:54.608 #81 NEW cov: 11775 ft: 15287 corp: 40/1242b lim: 50 exec/s: 81 rss: 71Mb L: 30/50 MS: 1 InsertRepeatedBytes- 00:11:54.608 [2024-04-24 10:07:07.703895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.608 [2024-04-24 10:07:07.703921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.608 [2024-04-24 10:07:07.703958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.608 [2024-04-24 10:07:07.703974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.608 [2024-04-24 10:07:07.704030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:54.608 [2024-04-24 10:07:07.704044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:54.608 #82 NEW cov: 11775 ft: 15300 corp: 41/1272b lim: 50 exec/s: 82 rss: 71Mb L: 30/50 MS: 1 PersAutoDict- DE: "\017\000"- 00:11:54.608 [2024-04-24 10:07:07.744193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.608 [2024-04-24 10:07:07.744220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.608 [2024-04-24 10:07:07.744262] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.608 [2024-04-24 10:07:07.744278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.608 [2024-04-24 10:07:07.744330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:11:54.608 [2024-04-24 10:07:07.744344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:54.608 [2024-04-24 10:07:07.744396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:11:54.608 [2024-04-24 10:07:07.744411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:54.608 #83 NEW cov: 11775 ft: 15313 corp: 42/1319b lim: 50 exec/s: 83 rss: 71Mb L: 47/50 MS: 1 PersAutoDict- DE: "\017\000"- 00:11:54.608 [2024-04-24 10:07:07.784044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.608 [2024-04-24 10:07:07.784076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.608 [2024-04-24 10:07:07.784127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:11:54.608 [2024-04-24 10:07:07.784144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:54.608 #84 NEW cov: 11775 ft: 15318 corp: 43/1347b lim: 50 exec/s: 84 rss: 71Mb L: 28/50 MS: 1 ShuffleBytes- 00:11:54.608 [2024-04-24 10:07:07.824035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:11:54.608 [2024-04-24 10:07:07.824066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:54.608 #85 NEW cov: 11775 ft: 15383 corp: 44/1366b lim: 50 exec/s: 42 rss: 71Mb L: 19/50 MS: 1 ShuffleBytes- 00:11:54.608 #85 DONE cov: 11775 ft: 15383 corp: 44/1366b lim: 50 exec/s: 42 rss: 71Mb 00:11:54.608 ###### Recommended dictionary. ###### 00:11:54.608 "\017\000" # Uses: 4 00:11:54.608 ###### End of recommended dictionary. 
###### 00:11:54.608 Done 85 runs in 2 second(s) 00:11:54.868 10:07:07 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_21.conf 00:11:54.868 10:07:07 -- ../common.sh@72 -- # (( i++ )) 00:11:54.868 10:07:07 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:54.868 10:07:07 -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:11:54.868 10:07:07 -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:11:54.868 10:07:07 -- nvmf/run.sh@24 -- # local timen=1 00:11:54.868 10:07:07 -- nvmf/run.sh@25 -- # local core=0x1 00:11:54.868 10:07:07 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:11:54.868 10:07:07 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:11:54.868 10:07:07 -- nvmf/run.sh@29 -- # printf %02d 22 00:11:54.868 10:07:07 -- nvmf/run.sh@29 -- # port=4422 00:11:54.868 10:07:07 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:11:54.868 10:07:08 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:11:54.868 10:07:08 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:54.868 10:07:08 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 -r /var/tmp/spdk22.sock 00:11:54.868 [2024-04-24 10:07:08.036880] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:54.868 [2024-04-24 10:07:08.036973] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175926 ] 00:11:54.868 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.127 [2024-04-24 10:07:08.349666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.384 [2024-04-24 10:07:08.435965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:55.384 [2024-04-24 10:07:08.436115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.384 [2024-04-24 10:07:08.494702] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.384 [2024-04-24 10:07:08.510885] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:11:55.384 INFO: Running with entropic power schedule (0xFF, 100). 00:11:55.384 INFO: Seed: 43110751 00:11:55.384 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:55.384 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:55.384 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:11:55.384 INFO: A corpus is not provided, starting from an empty corpus 00:11:55.384 #2 INITED exec/s: 0 rss: 61Mb 00:11:55.384 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:55.384 This may also happen if the target rejected all inputs we tried so far 00:11:55.384 [2024-04-24 10:07:08.566426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:55.384 [2024-04-24 10:07:08.566458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:55.384 [2024-04-24 10:07:08.566499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:55.384 [2024-04-24 10:07:08.566519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:55.384 [2024-04-24 10:07:08.566573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:55.384 [2024-04-24 10:07:08.566589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:55.642 NEW_FUNC[1/662]: 0x4a71d0 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:11:55.642 NEW_FUNC[2/662]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:55.642 #15 NEW cov: 11546 ft: 11551 corp: 2/58b lim: 85 exec/s: 0 rss: 67Mb L: 57/57 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:11:55.642 [2024-04-24 10:07:08.907142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:55.642 [2024-04-24 10:07:08.907213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:55.900 NEW_FUNC[1/3]: 0x1533be0 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3789 00:11:55.900 NEW_FUNC[2/3]: 0x1701fc0 in spdk_nvme_probe_poll_async /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1507 00:11:55.900 #18 NEW cov: 11687 ft: 13057 corp: 3/89b lim: 85 exec/s: 0 rss: 68Mb L: 31/57 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:11:55.900 [2024-04-24 10:07:08.957127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:55.900 [2024-04-24 10:07:08.957166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:55.900 #19 NEW cov: 11693 ft: 13257 corp: 4/120b lim: 85 exec/s: 0 rss: 68Mb L: 31/57 MS: 1 CopyPart- 00:11:55.900 [2024-04-24 10:07:08.997492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:55.900 [2024-04-24 10:07:08.997522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:55.900 [2024-04-24 10:07:08.997567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:55.900 [2024-04-24 10:07:08.997583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:55.900 [2024-04-24 10:07:08.997640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:55.900 [2024-04-24 10:07:08.997655] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:55.900 #24 NEW cov: 11778 ft: 13613 corp: 5/183b lim: 85 exec/s: 0 rss: 68Mb L: 63/63 MS: 5 ShuffleBytes-ChangeBit-ChangeBit-CopyPart-InsertRepeatedBytes- 00:11:55.900 [2024-04-24 10:07:09.037305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:55.900 [2024-04-24 10:07:09.037333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:55.900 #25 NEW cov: 11778 ft: 13697 corp: 6/214b lim: 85 exec/s: 0 rss: 68Mb L: 31/63 MS: 1 CopyPart- 00:11:55.900 [2024-04-24 10:07:09.077409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:55.900 [2024-04-24 10:07:09.077437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:55.900 #26 NEW cov: 11778 ft: 13761 corp: 7/246b lim: 85 exec/s: 0 rss: 68Mb L: 32/63 MS: 1 InsertByte- 00:11:55.900 [2024-04-24 10:07:09.127934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:55.900 [2024-04-24 10:07:09.127963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:55.900 [2024-04-24 10:07:09.128014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:55.900 [2024-04-24 10:07:09.128031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:55.900 [2024-04-24 10:07:09.128094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:55.900 [2024-04-24 10:07:09.128110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:55.900 #27 NEW cov: 11778 ft: 13832 corp: 8/310b lim: 85 exec/s: 0 rss: 68Mb L: 64/64 MS: 1 InsertByte- 00:11:55.900 [2024-04-24 10:07:09.167670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:55.900 [2024-04-24 10:07:09.167700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.159 #28 NEW cov: 11778 ft: 13857 corp: 9/341b lim: 85 exec/s: 0 rss: 68Mb L: 31/64 MS: 1 ChangeBinInt- 00:11:56.159 [2024-04-24 10:07:09.207808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.159 [2024-04-24 10:07:09.207837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.159 #29 NEW cov: 11778 ft: 13882 corp: 10/372b lim: 85 exec/s: 0 rss: 68Mb L: 31/64 MS: 1 ChangeBit- 00:11:56.159 [2024-04-24 10:07:09.247951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.159 [2024-04-24 10:07:09.247981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.159 #30 NEW cov: 11778 ft: 14031 corp: 11/403b lim: 85 exec/s: 0 rss: 68Mb L: 31/64 MS: 1 CopyPart- 00:11:56.159 [2024-04-24 
10:07:09.288538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.159 [2024-04-24 10:07:09.288567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.159 [2024-04-24 10:07:09.288612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:56.159 [2024-04-24 10:07:09.288627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:56.159 [2024-04-24 10:07:09.288684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:56.159 [2024-04-24 10:07:09.288700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:56.159 [2024-04-24 10:07:09.288758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:11:56.159 [2024-04-24 10:07:09.288774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:56.159 #31 NEW cov: 11778 ft: 14403 corp: 12/481b lim: 85 exec/s: 0 rss: 68Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:11:56.159 [2024-04-24 10:07:09.338203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.159 [2024-04-24 10:07:09.338230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.159 #32 NEW cov: 11778 ft: 14462 corp: 13/511b lim: 85 exec/s: 0 rss: 68Mb L: 30/78 MS: 1 EraseBytes- 00:11:56.159 [2024-04-24 10:07:09.378579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.159 [2024-04-24 10:07:09.378606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.159 [2024-04-24 10:07:09.378645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:56.159 [2024-04-24 10:07:09.378662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:56.159 [2024-04-24 10:07:09.378719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:56.159 [2024-04-24 10:07:09.378735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:56.159 #33 NEW cov: 11778 ft: 14488 corp: 14/573b lim: 85 exec/s: 0 rss: 68Mb L: 62/78 MS: 1 CopyPart- 00:11:56.159 [2024-04-24 10:07:09.418767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.159 [2024-04-24 10:07:09.418794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.159 [2024-04-24 10:07:09.418834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:56.159 [2024-04-24 10:07:09.418850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:56.159 
[2024-04-24 10:07:09.418924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:56.159 [2024-04-24 10:07:09.418938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:56.417 #34 NEW cov: 11778 ft: 14523 corp: 15/631b lim: 85 exec/s: 0 rss: 68Mb L: 58/78 MS: 1 CrossOver- 00:11:56.417 [2024-04-24 10:07:09.458565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.417 [2024-04-24 10:07:09.458593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.417 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:56.417 #35 NEW cov: 11801 ft: 14557 corp: 16/655b lim: 85 exec/s: 0 rss: 69Mb L: 24/78 MS: 1 EraseBytes- 00:11:56.417 [2024-04-24 10:07:09.508713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.417 [2024-04-24 10:07:09.508741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.417 #36 NEW cov: 11801 ft: 14584 corp: 17/686b lim: 85 exec/s: 0 rss: 69Mb L: 31/78 MS: 1 CrossOver- 00:11:56.417 [2024-04-24 10:07:09.548851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.417 [2024-04-24 10:07:09.548881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.417 #37 NEW cov: 11801 ft: 14600 corp: 18/718b lim: 85 exec/s: 37 rss: 69Mb L: 32/78 MS: 1 InsertByte- 00:11:56.417 [2024-04-24 10:07:09.589289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.417 [2024-04-24 10:07:09.589316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.417 [2024-04-24 10:07:09.589373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:56.417 [2024-04-24 10:07:09.589390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:56.417 [2024-04-24 10:07:09.589448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:56.417 [2024-04-24 10:07:09.589464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:56.417 #38 NEW cov: 11801 ft: 14623 corp: 19/776b lim: 85 exec/s: 38 rss: 69Mb L: 58/78 MS: 1 InsertByte- 00:11:56.417 [2024-04-24 10:07:09.629557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.417 [2024-04-24 10:07:09.629584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.417 [2024-04-24 10:07:09.629647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:56.417 [2024-04-24 10:07:09.629667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:11:56.417 [2024-04-24 10:07:09.629722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:56.417 [2024-04-24 10:07:09.629739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:56.417 [2024-04-24 10:07:09.629795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:11:56.417 [2024-04-24 10:07:09.629811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:56.417 #41 NEW cov: 11801 ft: 14650 corp: 20/859b lim: 85 exec/s: 41 rss: 69Mb L: 83/83 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:11:56.417 [2024-04-24 10:07:09.669167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.417 [2024-04-24 10:07:09.669194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.417 #42 NEW cov: 11801 ft: 14664 corp: 21/891b lim: 85 exec/s: 42 rss: 69Mb L: 32/83 MS: 1 ChangeBinInt- 00:11:56.674 [2024-04-24 10:07:09.709302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.675 [2024-04-24 10:07:09.709330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.675 #48 NEW cov: 11801 ft: 14685 corp: 22/915b lim: 85 exec/s: 48 rss: 69Mb L: 24/83 MS: 1 ChangeByte- 00:11:56.675 [2024-04-24 10:07:09.749439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.675 [2024-04-24 10:07:09.749466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.675 #49 NEW cov: 11801 ft: 14694 corp: 23/940b lim: 85 exec/s: 49 rss: 69Mb L: 25/83 MS: 1 InsertByte- 00:11:56.675 [2024-04-24 10:07:09.789785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.675 [2024-04-24 10:07:09.789813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.675 [2024-04-24 10:07:09.789867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:56.675 [2024-04-24 10:07:09.789884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:56.675 [2024-04-24 10:07:09.789942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:56.675 [2024-04-24 10:07:09.789959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:56.675 #50 NEW cov: 11801 ft: 14706 corp: 24/1003b lim: 85 exec/s: 50 rss: 69Mb L: 63/83 MS: 1 InsertByte- 00:11:56.675 [2024-04-24 10:07:09.829929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.675 [2024-04-24 10:07:09.829957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.675 [2024-04-24 10:07:09.829995] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:56.675 [2024-04-24 10:07:09.830011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:56.675 [2024-04-24 10:07:09.830071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:56.675 [2024-04-24 10:07:09.830104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:56.675 #51 NEW cov: 11801 ft: 14710 corp: 25/1060b lim: 85 exec/s: 51 rss: 69Mb L: 57/83 MS: 1 CrossOver- 00:11:56.675 [2024-04-24 10:07:09.870046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.675 [2024-04-24 10:07:09.870077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.675 [2024-04-24 10:07:09.870128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:56.675 [2024-04-24 10:07:09.870144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:56.675 [2024-04-24 10:07:09.870202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:56.675 [2024-04-24 10:07:09.870218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:56.675 #52 NEW cov: 11801 ft: 14720 corp: 26/1115b lim: 85 exec/s: 52 rss: 69Mb L: 55/83 MS: 1 CrossOver- 00:11:56.675 [2024-04-24 10:07:09.909950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.675 [2024-04-24 10:07:09.909975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.675 [2024-04-24 10:07:09.950012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.675 [2024-04-24 10:07:09.950040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.933 #54 NEW cov: 11801 ft: 14735 corp: 27/1147b lim: 85 exec/s: 54 rss: 69Mb L: 32/83 MS: 2 InsertByte-CMP- DE: "\001\000\000\006"- 00:11:56.933 [2024-04-24 10:07:09.990270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.933 [2024-04-24 10:07:09.990298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.933 [2024-04-24 10:07:09.990354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:56.933 [2024-04-24 10:07:09.990370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:56.933 #55 NEW cov: 11801 ft: 15048 corp: 28/1196b lim: 85 exec/s: 55 rss: 70Mb L: 49/83 MS: 1 CrossOver- 00:11:56.933 [2024-04-24 10:07:10.040282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.933 [2024-04-24 10:07:10.040318] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.933 #56 NEW cov: 11801 ft: 15061 corp: 29/1228b lim: 85 exec/s: 56 rss: 70Mb L: 32/83 MS: 1 CopyPart- 00:11:56.933 [2024-04-24 10:07:10.080381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.933 [2024-04-24 10:07:10.080420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.933 #57 NEW cov: 11801 ft: 15065 corp: 30/1259b lim: 85 exec/s: 57 rss: 70Mb L: 31/83 MS: 1 ChangeByte- 00:11:56.933 [2024-04-24 10:07:10.120478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.933 [2024-04-24 10:07:10.120508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.933 #58 NEW cov: 11801 ft: 15104 corp: 31/1289b lim: 85 exec/s: 58 rss: 70Mb L: 30/83 MS: 1 ChangeBinInt- 00:11:56.933 [2024-04-24 10:07:10.170667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.933 [2024-04-24 10:07:10.170696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:56.933 #59 NEW cov: 11801 ft: 15135 corp: 32/1320b lim: 85 exec/s: 59 rss: 70Mb L: 31/83 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:11:56.933 [2024-04-24 10:07:10.210769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:56.933 [2024-04-24 10:07:10.210798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:57.191 #60 NEW cov: 11801 ft: 15150 corp: 33/1351b lim: 85 exec/s: 60 rss: 70Mb L: 31/83 MS: 1 ChangeBinInt- 00:11:57.191 [2024-04-24 10:07:10.251053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:57.191 [2024-04-24 10:07:10.251088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:57.191 [2024-04-24 10:07:10.251148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:57.191 [2024-04-24 10:07:10.251166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:57.191 #61 NEW cov: 11801 ft: 15158 corp: 34/1385b lim: 85 exec/s: 61 rss: 70Mb L: 34/83 MS: 1 PersAutoDict- DE: "\001\000\000\006"- 00:11:57.191 [2024-04-24 10:07:10.291007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:57.191 [2024-04-24 10:07:10.291036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:57.191 #62 NEW cov: 11801 ft: 15176 corp: 35/1417b lim: 85 exec/s: 62 rss: 70Mb L: 32/83 MS: 1 ChangeByte- 00:11:57.191 [2024-04-24 10:07:10.341707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:57.191 [2024-04-24 10:07:10.341736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:11:57.191 [2024-04-24 10:07:10.341780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:57.191 [2024-04-24 10:07:10.341796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:57.191 [2024-04-24 10:07:10.341854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:57.191 [2024-04-24 10:07:10.341871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:57.191 [2024-04-24 10:07:10.341932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:11:57.191 [2024-04-24 10:07:10.341949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:57.191 #63 NEW cov: 11801 ft: 15181 corp: 36/1500b lim: 85 exec/s: 63 rss: 70Mb L: 83/83 MS: 1 ChangeByte- 00:11:57.191 [2024-04-24 10:07:10.391895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:57.191 [2024-04-24 10:07:10.391924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:57.191 [2024-04-24 10:07:10.391969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:57.191 [2024-04-24 10:07:10.391986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:57.191 [2024-04-24 10:07:10.392046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:57.191 [2024-04-24 10:07:10.392066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:57.191 [2024-04-24 10:07:10.392125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:11:57.191 [2024-04-24 10:07:10.392141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:57.191 #64 NEW cov: 11801 ft: 15196 corp: 37/1583b lim: 85 exec/s: 64 rss: 70Mb L: 83/83 MS: 1 ChangeBinInt- 00:11:57.191 [2024-04-24 10:07:10.441516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:57.191 [2024-04-24 10:07:10.441545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:57.450 #65 NEW cov: 11801 ft: 15201 corp: 38/1609b lim: 85 exec/s: 65 rss: 70Mb L: 26/83 MS: 1 InsertByte- 00:11:57.450 [2024-04-24 10:07:10.491986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:57.450 [2024-04-24 10:07:10.492016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:57.450 [2024-04-24 10:07:10.492067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:11:57.450 [2024-04-24 10:07:10.492085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:11:57.450 [2024-04-24 10:07:10.492144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:11:57.450 [2024-04-24 10:07:10.492161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:57.450 #66 NEW cov: 11801 ft: 15212 corp: 39/1667b lim: 85 exec/s: 66 rss: 70Mb L: 58/83 MS: 1 ShuffleBytes- 00:11:57.450 [2024-04-24 10:07:10.531700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:11:57.450 [2024-04-24 10:07:10.531728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:57.450 #67 NEW cov: 11801 ft: 15256 corp: 40/1698b lim: 85 exec/s: 33 rss: 70Mb L: 31/83 MS: 1 PersAutoDict- DE: "\001\000\000\006"- 00:11:57.450 #67 DONE cov: 11801 ft: 15256 corp: 40/1698b lim: 85 exec/s: 33 rss: 70Mb 00:11:57.450 ###### Recommended dictionary. ###### 00:11:57.450 "\001\000\000\006" # Uses: 2 00:11:57.450 "\377\377\377\377\377\377\377\377" # Uses: 0 00:11:57.450 ###### End of recommended dictionary. ###### 00:11:57.450 Done 67 runs in 2 second(s) 00:11:57.450 10:07:10 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_22.conf 00:11:57.450 10:07:10 -- ../common.sh@72 -- # (( i++ )) 00:11:57.450 10:07:10 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:57.450 10:07:10 -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:11:57.450 10:07:10 -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:11:57.450 10:07:10 -- nvmf/run.sh@24 -- # local timen=1 00:11:57.450 10:07:10 -- nvmf/run.sh@25 -- # local core=0x1 00:11:57.450 10:07:10 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:11:57.450 10:07:10 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:11:57.450 10:07:10 -- nvmf/run.sh@29 -- # printf %02d 23 00:11:57.450 10:07:10 -- nvmf/run.sh@29 -- # port=4423 00:11:57.450 10:07:10 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:11:57.450 10:07:10 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:11:57.450 10:07:10 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:57.450 10:07:10 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 -r /var/tmp/spdk23.sock 00:11:57.709 [2024-04-24 10:07:10.747778] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:11:57.709 [2024-04-24 10:07:10.747854] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176294 ] 00:11:57.709 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.967 [2024-04-24 10:07:11.064213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.967 [2024-04-24 10:07:11.150770] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:57.968 [2024-04-24 10:07:11.150928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.968 [2024-04-24 10:07:11.209388] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.968 [2024-04-24 10:07:11.225575] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:11:57.968 INFO: Running with entropic power schedule (0xFF, 100). 00:11:57.968 INFO: Seed: 2757113013 00:11:58.226 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:11:58.226 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:11:58.226 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:11:58.226 INFO: A corpus is not provided, starting from an empty corpus 00:11:58.226 #2 INITED exec/s: 0 rss: 61Mb 00:11:58.226 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:58.226 This may also happen if the target rejected all inputs we tried so far 00:11:58.226 [2024-04-24 10:07:11.280379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:58.226 [2024-04-24 10:07:11.280416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:58.226 [2024-04-24 10:07:11.280454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:58.226 [2024-04-24 10:07:11.280473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:58.485 NEW_FUNC[1/664]: 0x4aa400 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:11:58.485 NEW_FUNC[2/664]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:58.485 #3 NEW cov: 11507 ft: 11505 corp: 2/14b lim: 25 exec/s: 0 rss: 67Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:11:58.485 [2024-04-24 10:07:11.621151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:58.485 [2024-04-24 10:07:11.621200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:58.485 [2024-04-24 10:07:11.621254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:58.485 [2024-04-24 10:07:11.621272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:58.485 #4 NEW cov: 11620 ft: 12012 corp: 3/27b lim: 25 exec/s: 0 rss: 68Mb L: 13/13 MS: 1 ChangeBit- 00:11:58.485 [2024-04-24 10:07:11.691301] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:58.485 [2024-04-24 10:07:11.691338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:58.485 [2024-04-24 10:07:11.691375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:58.485 [2024-04-24 10:07:11.691396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:58.485 [2024-04-24 10:07:11.691427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:58.485 [2024-04-24 10:07:11.691445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:58.485 #5 NEW cov: 11626 ft: 12449 corp: 4/44b lim: 25 exec/s: 0 rss: 68Mb L: 17/17 MS: 1 CMP- DE: "\377\001\000\000"- 00:11:58.485 [2024-04-24 10:07:11.741399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:58.485 [2024-04-24 10:07:11.741432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:58.485 [2024-04-24 10:07:11.741482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:58.485 [2024-04-24 10:07:11.741504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:58.485 [2024-04-24 10:07:11.741536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:58.485 [2024-04-24 10:07:11.741553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:58.744 #6 NEW cov: 11711 ft: 12752 corp: 5/59b lim: 25 exec/s: 0 rss: 68Mb L: 15/17 MS: 1 CrossOver- 00:11:58.744 [2024-04-24 10:07:11.811558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:58.744 [2024-04-24 10:07:11.811590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:58.744 [2024-04-24 10:07:11.811641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:58.744 [2024-04-24 10:07:11.811659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:58.744 [2024-04-24 10:07:11.811690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:58.744 [2024-04-24 10:07:11.811706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:58.744 #7 NEW cov: 11711 ft: 12839 corp: 6/74b lim: 25 exec/s: 0 rss: 68Mb L: 15/17 MS: 1 ChangeByte- 00:11:58.744 [2024-04-24 10:07:11.871756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:58.744 [2024-04-24 10:07:11.871786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:58.744 [2024-04-24 10:07:11.871836] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:58.744 [2024-04-24 10:07:11.871855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:58.744 [2024-04-24 10:07:11.871887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:58.744 [2024-04-24 10:07:11.871904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:58.744 #8 NEW cov: 11711 ft: 12936 corp: 7/90b lim: 25 exec/s: 0 rss: 68Mb L: 16/17 MS: 1 InsertByte- 00:11:58.744 [2024-04-24 10:07:11.921879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:58.744 [2024-04-24 10:07:11.921910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:58.744 [2024-04-24 10:07:11.921959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:58.744 [2024-04-24 10:07:11.921977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:58.744 [2024-04-24 10:07:11.922008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:58.744 [2024-04-24 10:07:11.922024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:58.744 #9 NEW cov: 11711 ft: 12998 corp: 8/105b lim: 25 exec/s: 0 rss: 68Mb L: 15/17 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:11:58.744 [2024-04-24 10:07:11.982099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:58.744 [2024-04-24 10:07:11.982130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:58.744 [2024-04-24 10:07:11.982177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:58.744 [2024-04-24 10:07:11.982195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:58.744 [2024-04-24 10:07:11.982231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:58.744 [2024-04-24 10:07:11.982247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:58.744 [2024-04-24 10:07:11.982277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:11:58.744 [2024-04-24 10:07:11.982293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:58.744 [2024-04-24 10:07:11.982322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:11:58.744 [2024-04-24 10:07:11.982338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:59.003 #11 NEW cov: 11711 ft: 13499 corp: 9/130b lim: 25 exec/s: 0 rss: 68Mb L: 25/25 MS: 2 ChangeBit-InsertRepeatedBytes- 00:11:59.003 [2024-04-24 
10:07:12.042312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.003 [2024-04-24 10:07:12.042343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.003 [2024-04-24 10:07:12.042376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.003 [2024-04-24 10:07:12.042394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.003 [2024-04-24 10:07:12.042425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.003 [2024-04-24 10:07:12.042441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.003 [2024-04-24 10:07:12.042469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:11:59.003 [2024-04-24 10:07:12.042485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:59.003 [2024-04-24 10:07:12.042515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:11:59.004 [2024-04-24 10:07:12.042531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:59.004 #12 NEW cov: 11711 ft: 13534 corp: 10/155b lim: 25 exec/s: 0 rss: 68Mb L: 25/25 MS: 1 CopyPart- 00:11:59.004 [2024-04-24 10:07:12.112348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.004 [2024-04-24 10:07:12.112380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.004 [2024-04-24 10:07:12.112415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.004 [2024-04-24 10:07:12.112433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.004 #13 NEW cov: 11711 ft: 13651 corp: 11/168b lim: 25 exec/s: 0 rss: 68Mb L: 13/25 MS: 1 ChangeBit- 00:11:59.004 [2024-04-24 10:07:12.162516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.004 [2024-04-24 10:07:12.162545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.004 [2024-04-24 10:07:12.162595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.004 [2024-04-24 10:07:12.162613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.004 [2024-04-24 10:07:12.162643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.004 [2024-04-24 10:07:12.162660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.004 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:11:59.004 #14 NEW cov: 11728 
ft: 13676 corp: 12/186b lim: 25 exec/s: 0 rss: 68Mb L: 18/25 MS: 1 EraseBytes- 00:11:59.004 [2024-04-24 10:07:12.222671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.004 [2024-04-24 10:07:12.222700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.004 [2024-04-24 10:07:12.222749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.004 [2024-04-24 10:07:12.222768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.004 [2024-04-24 10:07:12.222798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.004 [2024-04-24 10:07:12.222815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.004 #15 NEW cov: 11728 ft: 13690 corp: 13/204b lim: 25 exec/s: 15 rss: 69Mb L: 18/25 MS: 1 CopyPart- 00:11:59.261 [2024-04-24 10:07:12.282830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.261 [2024-04-24 10:07:12.282875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.261 [2024-04-24 10:07:12.282909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.261 [2024-04-24 10:07:12.282927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.261 [2024-04-24 10:07:12.282959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.261 [2024-04-24 10:07:12.282975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.262 #16 NEW cov: 11728 ft: 13737 corp: 14/221b lim: 25 exec/s: 16 rss: 69Mb L: 17/25 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:11:59.262 [2024-04-24 10:07:12.353134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.262 [2024-04-24 10:07:12.353165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.262 [2024-04-24 10:07:12.353214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.262 [2024-04-24 10:07:12.353233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.262 [2024-04-24 10:07:12.353265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.262 [2024-04-24 10:07:12.353282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.262 [2024-04-24 10:07:12.353311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:11:59.262 [2024-04-24 10:07:12.353328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:59.262 [2024-04-24 
10:07:12.353357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:11:59.262 [2024-04-24 10:07:12.353374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:59.262 #17 NEW cov: 11728 ft: 13767 corp: 15/246b lim: 25 exec/s: 17 rss: 69Mb L: 25/25 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:11:59.262 [2024-04-24 10:07:12.403143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.262 [2024-04-24 10:07:12.403173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.262 [2024-04-24 10:07:12.403236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.262 [2024-04-24 10:07:12.403253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.262 #18 NEW cov: 11728 ft: 13775 corp: 16/259b lim: 25 exec/s: 18 rss: 69Mb L: 13/25 MS: 1 EraseBytes- 00:11:59.262 [2024-04-24 10:07:12.453288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.262 [2024-04-24 10:07:12.453317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.262 [2024-04-24 10:07:12.453366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.262 [2024-04-24 10:07:12.453385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.262 [2024-04-24 10:07:12.453415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.262 [2024-04-24 10:07:12.453431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.262 #19 NEW cov: 11728 ft: 13825 corp: 17/274b lim: 25 exec/s: 19 rss: 69Mb L: 15/25 MS: 1 CopyPart- 00:11:59.262 [2024-04-24 10:07:12.513533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.262 [2024-04-24 10:07:12.513562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.262 [2024-04-24 10:07:12.513611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.262 [2024-04-24 10:07:12.513629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.262 [2024-04-24 10:07:12.513660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.262 [2024-04-24 10:07:12.513676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.262 [2024-04-24 10:07:12.513705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:11:59.262 [2024-04-24 10:07:12.513722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:59.262 
[2024-04-24 10:07:12.513751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:11:59.262 [2024-04-24 10:07:12.513768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:59.520 #20 NEW cov: 11728 ft: 13908 corp: 18/299b lim: 25 exec/s: 20 rss: 69Mb L: 25/25 MS: 1 ChangeByte- 00:11:59.520 [2024-04-24 10:07:12.583636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.520 [2024-04-24 10:07:12.583665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.520 [2024-04-24 10:07:12.583715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.520 [2024-04-24 10:07:12.583733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.520 [2024-04-24 10:07:12.583764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.520 [2024-04-24 10:07:12.583780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.520 #21 NEW cov: 11728 ft: 13937 corp: 19/317b lim: 25 exec/s: 21 rss: 69Mb L: 18/25 MS: 1 CrossOver- 00:11:59.520 [2024-04-24 10:07:12.633883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.520 [2024-04-24 10:07:12.633917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.520 [2024-04-24 10:07:12.633952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.520 [2024-04-24 10:07:12.633969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.520 [2024-04-24 10:07:12.634000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.520 [2024-04-24 10:07:12.634016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.520 [2024-04-24 10:07:12.634044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:11:59.520 [2024-04-24 10:07:12.634068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:59.520 [2024-04-24 10:07:12.634099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:11:59.520 [2024-04-24 10:07:12.634115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:11:59.520 #22 NEW cov: 11728 ft: 13999 corp: 20/342b lim: 25 exec/s: 22 rss: 69Mb L: 25/25 MS: 1 ChangeByte- 00:11:59.521 [2024-04-24 10:07:12.693887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.521 [2024-04-24 10:07:12.693917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.521 [2024-04-24 
10:07:12.693952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.521 [2024-04-24 10:07:12.693971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.521 #23 NEW cov: 11728 ft: 14052 corp: 21/355b lim: 25 exec/s: 23 rss: 69Mb L: 13/25 MS: 1 ChangeBit- 00:11:59.521 [2024-04-24 10:07:12.753996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.521 [2024-04-24 10:07:12.754026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.780 #24 NEW cov: 11728 ft: 14438 corp: 22/364b lim: 25 exec/s: 24 rss: 69Mb L: 9/25 MS: 1 CrossOver- 00:11:59.780 [2024-04-24 10:07:12.824190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.780 [2024-04-24 10:07:12.824230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.780 #28 NEW cov: 11728 ft: 14454 corp: 23/370b lim: 25 exec/s: 28 rss: 69Mb L: 6/25 MS: 4 ShuffleBytes-ShuffleBytes-InsertByte-PersAutoDict- DE: "\377\001\000\000"- 00:11:59.780 [2024-04-24 10:07:12.874270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.780 [2024-04-24 10:07:12.874299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.780 #29 NEW cov: 11728 ft: 14502 corp: 24/375b lim: 25 exec/s: 29 rss: 69Mb L: 5/25 MS: 1 CMP- DE: "\003\000\000\000"- 00:11:59.780 [2024-04-24 10:07:12.924593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.780 [2024-04-24 10:07:12.924624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.780 [2024-04-24 10:07:12.924658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.780 [2024-04-24 10:07:12.924675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.780 [2024-04-24 10:07:12.924710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.780 [2024-04-24 10:07:12.924726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.780 [2024-04-24 10:07:12.924755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:11:59.780 [2024-04-24 10:07:12.924772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:59.780 #30 NEW cov: 11728 ft: 14625 corp: 25/396b lim: 25 exec/s: 30 rss: 69Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:11:59.780 [2024-04-24 10:07:12.984664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.780 [2024-04-24 10:07:12.984694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.780 [2024-04-24 10:07:12.984729] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.780 [2024-04-24 10:07:12.984747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.780 #31 NEW cov: 11728 ft: 14652 corp: 26/406b lim: 25 exec/s: 31 rss: 69Mb L: 10/25 MS: 1 EraseBytes- 00:11:59.780 [2024-04-24 10:07:13.034941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:11:59.780 [2024-04-24 10:07:13.034973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:11:59.780 [2024-04-24 10:07:13.035005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:11:59.780 [2024-04-24 10:07:13.035023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:11:59.780 [2024-04-24 10:07:13.035077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:11:59.780 [2024-04-24 10:07:13.035096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:11:59.780 [2024-04-24 10:07:13.035126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:11:59.780 [2024-04-24 10:07:13.035143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:11:59.780 [2024-04-24 10:07:13.035172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:11:59.780 [2024-04-24 10:07:13.035189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:00.039 #32 NEW cov: 11728 ft: 14665 corp: 27/431b lim: 25 exec/s: 32 rss: 69Mb L: 25/25 MS: 1 ShuffleBytes- 00:12:00.039 [2024-04-24 10:07:13.084885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:00.039 [2024-04-24 10:07:13.084917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:00.039 [2024-04-24 10:07:13.084967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:00.039 [2024-04-24 10:07:13.084985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:00.039 #33 NEW cov: 11728 ft: 14702 corp: 28/441b lim: 25 exec/s: 33 rss: 69Mb L: 10/25 MS: 1 EraseBytes- 00:12:00.039 [2024-04-24 10:07:13.145098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:00.039 [2024-04-24 10:07:13.145130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:00.039 [2024-04-24 10:07:13.145179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:00.039 [2024-04-24 10:07:13.145197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:00.039 [2024-04-24 10:07:13.145231] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:00.039 [2024-04-24 10:07:13.145248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:00.039 #34 NEW cov: 11735 ft: 14714 corp: 29/456b lim: 25 exec/s: 34 rss: 70Mb L: 15/25 MS: 1 CopyPart- 00:12:00.039 [2024-04-24 10:07:13.205242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:00.039 [2024-04-24 10:07:13.205272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:00.039 [2024-04-24 10:07:13.205308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:00.039 [2024-04-24 10:07:13.205326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:00.039 #35 NEW cov: 11735 ft: 14735 corp: 30/466b lim: 25 exec/s: 35 rss: 70Mb L: 10/25 MS: 1 ShuffleBytes- 00:12:00.039 [2024-04-24 10:07:13.265371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:00.039 [2024-04-24 10:07:13.265401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:00.039 [2024-04-24 10:07:13.265451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:00.039 [2024-04-24 10:07:13.265469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:00.039 #36 NEW cov: 11735 ft: 14749 corp: 31/476b lim: 25 exec/s: 18 rss: 70Mb L: 10/25 MS: 1 ChangeByte- 00:12:00.039 #36 DONE cov: 11735 ft: 14749 corp: 31/476b lim: 25 exec/s: 18 rss: 70Mb 00:12:00.040 ###### Recommended dictionary. ###### 00:12:00.040 "\377\001\000\000" # Uses: 4 00:12:00.040 "\003\000\000\000" # Uses: 0 00:12:00.040 ###### End of recommended dictionary. 
###### 00:12:00.040 Done 36 runs in 2 second(s) 00:12:00.299 10:07:13 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_23.conf 00:12:00.299 10:07:13 -- ../common.sh@72 -- # (( i++ )) 00:12:00.299 10:07:13 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:00.299 10:07:13 -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:12:00.299 10:07:13 -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:12:00.299 10:07:13 -- nvmf/run.sh@24 -- # local timen=1 00:12:00.299 10:07:13 -- nvmf/run.sh@25 -- # local core=0x1 00:12:00.299 10:07:13 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:12:00.299 10:07:13 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:12:00.299 10:07:13 -- nvmf/run.sh@29 -- # printf %02d 24 00:12:00.299 10:07:13 -- nvmf/run.sh@29 -- # port=4424 00:12:00.299 10:07:13 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:12:00.299 10:07:13 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:12:00.299 10:07:13 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:00.299 10:07:13 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 -r /var/tmp/spdk24.sock 00:12:00.299 [2024-04-24 10:07:13.481521] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:12:00.299 [2024-04-24 10:07:13.481593] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176666 ] 00:12:00.299 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.558 [2024-04-24 10:07:13.795279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.817 [2024-04-24 10:07:13.889177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:00.817 [2024-04-24 10:07:13.889312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.817 [2024-04-24 10:07:13.947857] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.817 [2024-04-24 10:07:13.964055] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:12:00.817 INFO: Running with entropic power schedule (0xFF, 100). 00:12:00.817 INFO: Seed: 1202147838 00:12:00.817 INFO: Loaded 1 modules (341207 inline 8-bit counters): 341207 [0x280674c, 0x2859c23), 00:12:00.817 INFO: Loaded 1 PC tables (341207 PCs): 341207 [0x2859c28,0x2d8e998), 00:12:00.817 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:12:00.817 INFO: A corpus is not provided, starting from an empty corpus 00:12:00.817 #2 INITED exec/s: 0 rss: 61Mb 00:12:00.817 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:00.817 This may also happen if the target rejected all inputs we tried so far 00:12:00.817 [2024-04-24 10:07:14.029603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.817 [2024-04-24 10:07:14.029636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:00.817 [2024-04-24 10:07:14.029679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.817 [2024-04-24 10:07:14.029695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:00.817 [2024-04-24 10:07:14.029744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.817 [2024-04-24 10:07:14.029759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:00.817 [2024-04-24 10:07:14.029810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.817 [2024-04-24 10:07:14.029824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.076 NEW_FUNC[1/665]: 0x4ab4e0 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:12:01.076 NEW_FUNC[2/665]: 0x4bc140 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:01.076 #50 NEW cov: 11579 ft: 11580 corp: 2/81b lim: 100 exec/s: 0 rss: 67Mb L: 80/80 MS: 3 CMP-ShuffleBytes-InsertRepeatedBytes- DE: "\000\000\000\000\000\000\000\000"- 00:12:01.335 [2024-04-24 10:07:14.360338] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.335 [2024-04-24 10:07:14.360392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.335 [2024-04-24 10:07:14.360472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.335 [2024-04-24 10:07:14.360496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.335 #52 NEW cov: 11692 ft: 12480 corp: 3/132b lim: 100 exec/s: 0 rss: 68Mb L: 51/80 MS: 2 ChangeByte-CrossOver- 00:12:01.335 [2024-04-24 10:07:14.400326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.335 [2024-04-24 10:07:14.400356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.335 [2024-04-24 10:07:14.400419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.335 [2024-04-24 
10:07:14.400436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.335 #53 NEW cov: 11698 ft: 12782 corp: 4/183b lim: 100 exec/s: 0 rss: 68Mb L: 51/80 MS: 1 CrossOver- 00:12:01.335 [2024-04-24 10:07:14.440741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.335 [2024-04-24 10:07:14.440770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.335 [2024-04-24 10:07:14.440807] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.335 [2024-04-24 10:07:14.440823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.335 [2024-04-24 10:07:14.440877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.335 [2024-04-24 10:07:14.440891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.335 [2024-04-24 10:07:14.440946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.335 [2024-04-24 10:07:14.440960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.335 #54 NEW cov: 11783 ft: 13100 corp: 5/263b lim: 100 exec/s: 0 rss: 68Mb L: 80/80 MS: 1 ShuffleBytes- 00:12:01.335 [2024-04-24 10:07:14.490716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.335 [2024-04-24 10:07:14.490744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.335 [2024-04-24 10:07:14.490801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.490818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.490855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.490871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.336 #55 NEW cov: 11783 ft: 13502 corp: 6/341b lim: 100 exec/s: 0 rss: 68Mb L: 78/80 MS: 1 CopyPart- 00:12:01.336 [2024-04-24 10:07:14.531162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.531191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.531246] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.531262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.531316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.531331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.531388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.531404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.531459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.531475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:01.336 #56 NEW cov: 11783 ft: 13574 corp: 7/441b lim: 100 exec/s: 0 rss: 68Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:12:01.336 [2024-04-24 10:07:14.571212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.571241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.571289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.571303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.571357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.571373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.571428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.571442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.571495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.571511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:01.336 #57 NEW cov: 11783 ft: 13674 corp: 8/541b lim: 100 exec/s: 0 rss: 68Mb L: 100/100 MS: 1 ChangeBit- 00:12:01.336 [2024-04-24 
10:07:14.611381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.611409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.611462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.611477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.611531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.611547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.611602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.611618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.336 [2024-04-24 10:07:14.611673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.336 [2024-04-24 10:07:14.611691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:01.595 #58 NEW cov: 11783 ft: 13681 corp: 9/641b lim: 100 exec/s: 0 rss: 68Mb L: 100/100 MS: 1 CopyPart- 00:12:01.595 [2024-04-24 10:07:14.651453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.595 [2024-04-24 10:07:14.651481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.595 [2024-04-24 10:07:14.651531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.595 [2024-04-24 10:07:14.651547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.595 [2024-04-24 10:07:14.651601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.595 [2024-04-24 10:07:14.651615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.595 [2024-04-24 10:07:14.651669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.595 [2024-04-24 10:07:14.651686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.595 [2024-04-24 10:07:14.651739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 
lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.595 [2024-04-24 10:07:14.651753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:01.595 #59 NEW cov: 11783 ft: 13752 corp: 10/741b lim: 100 exec/s: 0 rss: 68Mb L: 100/100 MS: 1 ChangeBit- 00:12:01.595 [2024-04-24 10:07:14.691406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.595 [2024-04-24 10:07:14.691433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.595 [2024-04-24 10:07:14.691481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.595 [2024-04-24 10:07:14.691497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.691551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.691567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.691620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251626285760695 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.691637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.596 #60 NEW cov: 11783 ft: 13780 corp: 11/829b lim: 100 exec/s: 0 rss: 68Mb L: 88/100 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:12:01.596 [2024-04-24 10:07:14.731395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.731423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.731472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251630441772983 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.731488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.731544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.731558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.596 #61 NEW cov: 11783 ft: 13847 corp: 12/907b lim: 100 exec/s: 0 rss: 68Mb L: 78/100 MS: 1 ChangeBit- 00:12:01.596 [2024-04-24 10:07:14.771768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.771795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.771844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.771860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.771914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.771929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.771985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.771999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.772053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.772073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:01.596 #62 NEW cov: 11783 ft: 13892 corp: 13/1007b lim: 100 exec/s: 0 rss: 68Mb L: 100/100 MS: 1 CrossOver- 00:12:01.596 [2024-04-24 10:07:14.811766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.811794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.811857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.811873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.811929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.811946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.812004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251626285760695 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.812018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.596 #63 NEW cov: 11783 ft: 13915 corp: 14/1095b lim: 100 exec/s: 0 rss: 68Mb L: 88/100 MS: 1 ChangeBinInt- 00:12:01.596 [2024-04-24 10:07:14.851883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.851910] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.851957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.851974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.852028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.852042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.596 [2024-04-24 10:07:14.852104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.596 [2024-04-24 10:07:14.852119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.596 #64 NEW cov: 11783 ft: 13928 corp: 15/1188b lim: 100 exec/s: 0 rss: 69Mb L: 93/100 MS: 1 CrossOver- 00:12:01.855 [2024-04-24 10:07:14.891810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:14.891837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:14.891876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:14.891893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:14.891965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:14.891982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.856 NEW_FUNC[1/1]: 0x195af00 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:12:01.856 #65 NEW cov: 11806 ft: 14100 corp: 16/1257b lim: 100 exec/s: 0 rss: 69Mb L: 69/100 MS: 1 EraseBytes- 00:12:01.856 [2024-04-24 10:07:14.941971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:14.941999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:14.942037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:14.942052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:14.942115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:14.942131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.856 #66 NEW cov: 11806 ft: 14135 corp: 17/1334b lim: 100 exec/s: 0 rss: 69Mb L: 77/100 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:12:01.856 [2024-04-24 10:07:14.982216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:14.982246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:14.982302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:14.982318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:14.982374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:14.982390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:14.982447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:14.982462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.856 #67 NEW cov: 11806 ft: 14183 corp: 18/1414b lim: 100 exec/s: 0 rss: 69Mb L: 80/100 MS: 1 ChangeByte- 00:12:01.856 [2024-04-24 10:07:15.022240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4889743558363851626 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:15.022267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:15.022307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15842497851538791387 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:15.022324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:15.022380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15842497851538791387 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:15.022396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.856 #72 NEW cov: 11806 ft: 14209 corp: 19/1483b lim: 100 exec/s: 72 rss: 69Mb L: 69/100 MS: 5 CMP-ChangeBinInt-EraseBytes-ShuffleBytes-InsertRepeatedBytes- DE: "\377\010+j\301\271d*"- 00:12:01.856 [2024-04-24 10:07:15.062479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 
[2024-04-24 10:07:15.062506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:15.062550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:15.062565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:15.062621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:15.062635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:15.062692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251626285760695 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:15.062707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.856 #73 NEW cov: 11806 ft: 14245 corp: 20/1571b lim: 100 exec/s: 73 rss: 69Mb L: 88/100 MS: 1 ChangeBit- 00:12:01.856 [2024-04-24 10:07:15.102615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:15.102643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:15.102690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368007607 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:15.102706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:15.102762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:15.102777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:01.856 [2024-04-24 10:07:15.102834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.856 [2024-04-24 10:07:15.102848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:01.856 #74 NEW cov: 11806 ft: 14247 corp: 21/1652b lim: 100 exec/s: 74 rss: 69Mb L: 81/100 MS: 1 InsertByte- 00:12:02.116 [2024-04-24 10:07:15.142762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.142790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.142839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.142855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.142928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.142943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.143002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629367769015 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.143018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.116 #75 NEW cov: 11806 ft: 14280 corp: 22/1737b lim: 100 exec/s: 75 rss: 69Mb L: 85/100 MS: 1 EraseBytes- 00:12:02.116 [2024-04-24 10:07:15.183024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.183053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.183111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.183129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.183183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.183198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.183254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.183272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.183329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.183345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:02.116 #76 NEW cov: 11806 ft: 14290 corp: 23/1837b lim: 100 exec/s: 76 rss: 69Mb L: 100/100 MS: 1 ShuffleBytes- 00:12:02.116 [2024-04-24 10:07:15.222622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.222649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.222693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 
len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.222710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.116 #77 NEW cov: 11806 ft: 14307 corp: 24/1888b lim: 100 exec/s: 77 rss: 69Mb L: 51/100 MS: 1 CopyPart- 00:12:02.116 [2024-04-24 10:07:15.262890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.262917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.262954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251630441772983 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.262970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.263028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:65289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.263043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.116 #78 NEW cov: 11806 ft: 14340 corp: 25/1966b lim: 100 exec/s: 78 rss: 70Mb L: 78/100 MS: 1 CMP- DE: "\377\010+j\342\240E\""- 00:12:02.116 [2024-04-24 10:07:15.302872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.302899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.302945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:11548674988972927714 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.302961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.116 #79 NEW cov: 11806 ft: 14358 corp: 26/2025b lim: 100 exec/s: 79 rss: 70Mb L: 59/100 MS: 1 PersAutoDict- DE: "\377\010+j\342\240E\""- 00:12:02.116 [2024-04-24 10:07:15.343348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.343375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.343425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.343444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.343498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.343513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.116 [2024-04-24 10:07:15.343568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.116 [2024-04-24 10:07:15.343583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.116 #80 NEW cov: 11806 ft: 14377 corp: 27/2124b lim: 100 exec/s: 80 rss: 70Mb L: 99/100 MS: 1 EraseBytes- 00:12:02.116 [2024-04-24 10:07:15.383614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.117 [2024-04-24 10:07:15.383643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.117 [2024-04-24 10:07:15.383709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.117 [2024-04-24 10:07:15.383725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.117 [2024-04-24 10:07:15.383782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.117 [2024-04-24 10:07:15.383797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.117 [2024-04-24 10:07:15.383852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.117 [2024-04-24 10:07:15.383867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.117 [2024-04-24 10:07:15.383923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:13238251626285760512 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.117 [2024-04-24 10:07:15.383939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:02.376 #81 NEW cov: 11806 ft: 14383 corp: 28/2224b lim: 100 exec/s: 81 rss: 70Mb L: 100/100 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:12:02.376 [2024-04-24 10:07:15.423384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:2092 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.423410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.376 [2024-04-24 10:07:15.423457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.423473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.376 [2024-04-24 10:07:15.423545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.423561] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.376 #82 NEW cov: 11806 ft: 14384 corp: 29/2293b lim: 100 exec/s: 82 rss: 70Mb L: 69/100 MS: 1 PersAutoDict- DE: "\377\010+j\342\240E\""- 00:12:02.376 [2024-04-24 10:07:15.463472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.463502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.376 [2024-04-24 10:07:15.463549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251630441772983 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.463564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.376 [2024-04-24 10:07:15.463618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.463633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.376 #83 NEW cov: 11806 ft: 14403 corp: 30/2371b lim: 100 exec/s: 83 rss: 70Mb L: 78/100 MS: 1 ChangeBinInt- 00:12:02.376 [2024-04-24 10:07:15.503605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.503632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.376 [2024-04-24 10:07:15.503669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.503684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.376 [2024-04-24 10:07:15.503740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.503755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.376 #84 NEW cov: 11806 ft: 14475 corp: 31/2446b lim: 100 exec/s: 84 rss: 70Mb L: 75/100 MS: 1 EraseBytes- 00:12:02.376 [2024-04-24 10:07:15.543956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.543984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.376 [2024-04-24 10:07:15.544038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251630441248695 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.544054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.376 [2024-04-24 10:07:15.544116] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:65289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.544133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.376 #85 NEW cov: 11806 ft: 14554 corp: 32/2524b lim: 100 exec/s: 85 rss: 70Mb L: 78/100 MS: 1 ChangeBinInt- 00:12:02.376 [2024-04-24 10:07:15.583876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294251 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.376 [2024-04-24 10:07:15.583904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.377 [2024-04-24 10:07:15.583957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629372223415 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.377 [2024-04-24 10:07:15.583975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.377 [2024-04-24 10:07:15.584028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.377 [2024-04-24 10:07:15.584046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.377 #86 NEW cov: 11806 ft: 14559 corp: 33/2603b lim: 100 exec/s: 86 rss: 70Mb L: 79/100 MS: 1 InsertByte- 00:12:02.377 [2024-04-24 10:07:15.624281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.377 [2024-04-24 10:07:15.624308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.377 [2024-04-24 10:07:15.624365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.377 [2024-04-24 10:07:15.624380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.377 [2024-04-24 10:07:15.624433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1060856922112 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.377 [2024-04-24 10:07:15.624449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.377 [2024-04-24 10:07:15.624501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.377 [2024-04-24 10:07:15.624516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.377 [2024-04-24 10:07:15.624570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.377 [2024-04-24 10:07:15.624586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 
00:12:02.377 #87 NEW cov: 11806 ft: 14573 corp: 34/2703b lim: 100 exec/s: 87 rss: 70Mb L: 100/100 MS: 1 CrossOver- 00:12:02.636 [2024-04-24 10:07:15.664243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.664270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.636 [2024-04-24 10:07:15.664317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.664333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.636 [2024-04-24 10:07:15.664388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.664403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.636 [2024-04-24 10:07:15.664458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.664473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.636 #88 NEW cov: 11806 ft: 14652 corp: 35/2783b lim: 100 exec/s: 88 rss: 70Mb L: 80/100 MS: 1 ShuffleBytes- 00:12:02.636 [2024-04-24 10:07:15.704329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.704355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.636 [2024-04-24 10:07:15.704399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251630441772983 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.704413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.636 [2024-04-24 10:07:15.704483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.704499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.636 [2024-04-24 10:07:15.704554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.704570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.636 #89 NEW cov: 11806 ft: 14656 corp: 36/2869b lim: 100 exec/s: 89 rss: 70Mb L: 86/100 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:12:02.636 [2024-04-24 10:07:15.744312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.744340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.636 [2024-04-24 10:07:15.744394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.744412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.636 [2024-04-24 10:07:15.744468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:51510706717065216 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.744484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.636 #90 NEW cov: 11806 ft: 14666 corp: 37/2937b lim: 100 exec/s: 90 rss: 70Mb L: 68/100 MS: 1 EraseBytes- 00:12:02.636 [2024-04-24 10:07:15.784591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4889743558363851626 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.784619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.636 [2024-04-24 10:07:15.784687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15842497851538791387 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.636 [2024-04-24 10:07:15.784704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.784757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15842497851538791387 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.784771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.784825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.784840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.637 #91 NEW cov: 11806 ft: 14667 corp: 38/3035b lim: 100 exec/s: 91 rss: 71Mb L: 98/100 MS: 1 InsertRepeatedBytes- 00:12:02.637 [2024-04-24 10:07:15.824680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251628143294391 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.824707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.824750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251630441772983 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.824766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.824821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 
cid:2 nsid:0 lba:789061285632 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.824837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.824890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.824905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.637 #92 NEW cov: 11806 ft: 14672 corp: 39/3117b lim: 100 exec/s: 92 rss: 71Mb L: 82/100 MS: 1 InsertRepeatedBytes- 00:12:02.637 [2024-04-24 10:07:15.864952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.864980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.865031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.865046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.865106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.865121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.865175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.865190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.865245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.865259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:02.637 #93 NEW cov: 11806 ft: 14680 corp: 40/3217b lim: 100 exec/s: 93 rss: 71Mb L: 100/100 MS: 1 ShuffleBytes- 00:12:02.637 [2024-04-24 10:07:15.905091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13256266026652776375 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.905119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.905191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.905207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.905260] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.905276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.905331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.905347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.637 [2024-04-24 10:07:15.905404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:13238251629368031159 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.637 [2024-04-24 10:07:15.905419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:02.896 #94 NEW cov: 11806 ft: 14685 corp: 41/3317b lim: 100 exec/s: 94 rss: 71Mb L: 100/100 MS: 1 ChangeBit- 00:12:02.896 [2024-04-24 10:07:15.945006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4889743558363851626 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.896 [2024-04-24 10:07:15.945034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.896 [2024-04-24 10:07:15.945080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15842497851538791387 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.896 [2024-04-24 10:07:15.945096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.896 [2024-04-24 10:07:15.945151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15842497851538791387 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.896 [2024-04-24 10:07:15.945165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.896 [2024-04-24 10:07:15.945221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.896 [2024-04-24 10:07:15.945236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.896 #95 NEW cov: 11806 ft: 14762 corp: 42/3415b lim: 100 exec/s: 95 rss: 71Mb L: 98/100 MS: 1 ShuffleBytes- 00:12:02.896 [2024-04-24 10:07:15.985187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4889743558363851626 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.896 [2024-04-24 10:07:15.985214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:02.896 [2024-04-24 10:07:15.985254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15842497851538791387 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.896 [2024-04-24 10:07:15.985269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:02.897 
[2024-04-24 10:07:15.985324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15842497851538791419 len:56284 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.897 [2024-04-24 10:07:15.985339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:02.897 [2024-04-24 10:07:15.985393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.897 [2024-04-24 10:07:15.985408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:02.897 #96 NEW cov: 11806 ft: 14771 corp: 43/3513b lim: 100 exec/s: 48 rss: 71Mb L: 98/100 MS: 1 ChangeBit- 00:12:02.897 #96 DONE cov: 11806 ft: 14771 corp: 43/3513b lim: 100 exec/s: 48 rss: 71Mb 00:12:02.897 ###### Recommended dictionary. ###### 00:12:02.897 "\000\000\000\000\000\000\000\000" # Uses: 4 00:12:02.897 "\377\010+j\301\271d*" # Uses: 0 00:12:02.897 "\377\010+j\342\240E\"" # Uses: 2 00:12:02.897 ###### End of recommended dictionary. ###### 00:12:02.897 Done 96 runs in 2 second(s) 00:12:02.897 10:07:16 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_24.conf 00:12:02.897 10:07:16 -- ../common.sh@72 -- # (( i++ )) 00:12:02.897 10:07:16 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:02.897 10:07:16 -- nvmf/run.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:12:02.897 00:12:02.897 real 1m7.703s 00:12:02.897 user 1m40.544s 00:12:02.897 sys 0m10.273s 00:12:02.897 10:07:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.897 10:07:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.897 ************************************ 00:12:02.897 END TEST nvmf_fuzz 00:12:02.897 ************************************ 00:12:03.158 10:07:16 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:12:03.158 10:07:16 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:12:03.158 10:07:16 -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:12:03.158 10:07:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:03.158 10:07:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:03.158 10:07:16 -- common/autotest_common.sh@10 -- # set +x 00:12:03.158 ************************************ 00:12:03.158 START TEST vfio_fuzz 00:12:03.158 ************************************ 00:12:03.158 10:07:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:12:03.158 * Looking for test storage... 
00:12:03.158 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:03.158 10:07:16 -- vfio/run.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:12:03.158 10:07:16 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:12:03.158 10:07:16 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:03.158 10:07:16 -- common/autotest_common.sh@34 -- # set -e 00:12:03.158 10:07:16 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:03.158 10:07:16 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:03.158 10:07:16 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:03.158 10:07:16 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:12:03.158 10:07:16 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:03.158 10:07:16 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:03.158 10:07:16 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:03.158 10:07:16 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:03.158 10:07:16 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:03.158 10:07:16 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:03.158 10:07:16 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:03.158 10:07:16 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:03.158 10:07:16 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:03.158 10:07:16 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:03.158 10:07:16 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:03.158 10:07:16 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:03.158 10:07:16 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:03.158 10:07:16 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:03.158 10:07:16 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:03.158 10:07:16 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:03.158 10:07:16 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:03.159 10:07:16 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:03.159 10:07:16 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:12:03.159 10:07:16 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:03.159 10:07:16 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:03.159 10:07:16 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:03.159 10:07:16 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:03.159 10:07:16 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:03.159 10:07:16 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:03.159 10:07:16 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:03.159 10:07:16 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:03.159 10:07:16 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:03.159 10:07:16 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:03.159 10:07:16 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:03.159 10:07:16 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:03.159 10:07:16 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:03.159 10:07:16 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:03.159 10:07:16 -- common/build_config.sh@34 -- # 
CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:12:03.159 10:07:16 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:12:03.159 10:07:16 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:12:03.159 10:07:16 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:03.159 10:07:16 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:03.159 10:07:16 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:03.159 10:07:16 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:03.159 10:07:16 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:03.159 10:07:16 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:03.159 10:07:16 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:03.159 10:07:16 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:03.159 10:07:16 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:03.159 10:07:16 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:12:03.159 10:07:16 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:12:03.159 10:07:16 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:03.159 10:07:16 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:12:03.159 10:07:16 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:12:03.159 10:07:16 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:12:03.159 10:07:16 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:12:03.159 10:07:16 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:12:03.159 10:07:16 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:12:03.159 10:07:16 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:12:03.159 10:07:16 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:12:03.159 10:07:16 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:12:03.159 10:07:16 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:12:03.159 10:07:16 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:12:03.159 10:07:16 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:12:03.159 10:07:16 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:12:03.159 10:07:16 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:12:03.159 10:07:16 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:12:03.159 10:07:16 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:12:03.159 10:07:16 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:12:03.159 10:07:16 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:03.159 10:07:16 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:12:03.159 10:07:16 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:12:03.159 10:07:16 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:12:03.159 10:07:16 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:12:03.159 10:07:16 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:12:03.159 10:07:16 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:12:03.159 10:07:16 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:12:03.159 10:07:16 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:12:03.159 10:07:16 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:12:03.159 10:07:16 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:12:03.159 10:07:16 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:03.159 10:07:16 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:12:03.159 10:07:16 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:12:03.159 10:07:16 -- common/autotest_common.sh@48 -- # source 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:12:03.159 10:07:16 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:12:03.159 10:07:16 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:12:03.159 10:07:16 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:12:03.159 10:07:16 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:03.159 10:07:16 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:12:03.159 10:07:16 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:12:03.159 10:07:16 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:12:03.159 10:07:16 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:03.159 10:07:16 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:03.159 10:07:16 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:03.159 10:07:16 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:03.159 10:07:16 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:03.159 10:07:16 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:03.159 10:07:16 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:12:03.159 10:07:16 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:03.159 #define SPDK_CONFIG_H 00:12:03.159 #define SPDK_CONFIG_APPS 1 00:12:03.159 #define SPDK_CONFIG_ARCH native 00:12:03.159 #undef SPDK_CONFIG_ASAN 00:12:03.159 #undef SPDK_CONFIG_AVAHI 00:12:03.159 #undef SPDK_CONFIG_CET 00:12:03.159 #define SPDK_CONFIG_COVERAGE 1 00:12:03.159 #define SPDK_CONFIG_CROSS_PREFIX 00:12:03.159 #undef SPDK_CONFIG_CRYPTO 00:12:03.159 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:03.159 #undef SPDK_CONFIG_CUSTOMOCF 00:12:03.159 #undef SPDK_CONFIG_DAOS 00:12:03.159 #define SPDK_CONFIG_DAOS_DIR 00:12:03.159 #define SPDK_CONFIG_DEBUG 1 00:12:03.159 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:03.159 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:12:03.159 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:03.159 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:03.159 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:03.159 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:12:03.159 #define SPDK_CONFIG_EXAMPLES 1 00:12:03.159 #undef SPDK_CONFIG_FC 00:12:03.159 #define SPDK_CONFIG_FC_PATH 00:12:03.159 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:03.159 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:03.159 #undef SPDK_CONFIG_FUSE 00:12:03.159 #define SPDK_CONFIG_FUZZER 1 00:12:03.159 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:12:03.159 #undef SPDK_CONFIG_GOLANG 00:12:03.159 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:03.159 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:03.159 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:03.159 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:03.159 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:03.159 #define SPDK_CONFIG_IDXD 1 00:12:03.159 #undef SPDK_CONFIG_IDXD_KERNEL 00:12:03.159 #undef SPDK_CONFIG_IPSEC_MB 
00:12:03.159 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:03.159 #define SPDK_CONFIG_ISAL 1 00:12:03.159 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:03.159 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:03.159 #define SPDK_CONFIG_LIBDIR 00:12:03.159 #undef SPDK_CONFIG_LTO 00:12:03.159 #define SPDK_CONFIG_MAX_LCORES 00:12:03.159 #define SPDK_CONFIG_NVME_CUSE 1 00:12:03.159 #undef SPDK_CONFIG_OCF 00:12:03.159 #define SPDK_CONFIG_OCF_PATH 00:12:03.159 #define SPDK_CONFIG_OPENSSL_PATH 00:12:03.159 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:03.159 #undef SPDK_CONFIG_PGO_USE 00:12:03.159 #define SPDK_CONFIG_PREFIX /usr/local 00:12:03.159 #undef SPDK_CONFIG_RAID5F 00:12:03.159 #undef SPDK_CONFIG_RBD 00:12:03.159 #define SPDK_CONFIG_RDMA 1 00:12:03.159 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:03.159 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:03.159 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:03.159 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:03.159 #undef SPDK_CONFIG_SHARED 00:12:03.159 #undef SPDK_CONFIG_SMA 00:12:03.159 #define SPDK_CONFIG_TESTS 1 00:12:03.159 #undef SPDK_CONFIG_TSAN 00:12:03.159 #define SPDK_CONFIG_UBLK 1 00:12:03.159 #define SPDK_CONFIG_UBSAN 1 00:12:03.159 #undef SPDK_CONFIG_UNIT_TESTS 00:12:03.159 #undef SPDK_CONFIG_URING 00:12:03.159 #define SPDK_CONFIG_URING_PATH 00:12:03.159 #undef SPDK_CONFIG_URING_ZNS 00:12:03.159 #undef SPDK_CONFIG_USDT 00:12:03.159 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:03.159 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:03.159 #define SPDK_CONFIG_VFIO_USER 1 00:12:03.159 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:03.159 #define SPDK_CONFIG_VHOST 1 00:12:03.159 #define SPDK_CONFIG_VIRTIO 1 00:12:03.159 #undef SPDK_CONFIG_VTUNE 00:12:03.159 #define SPDK_CONFIG_VTUNE_DIR 00:12:03.159 #define SPDK_CONFIG_WERROR 1 00:12:03.159 #define SPDK_CONFIG_WPDK_DIR 00:12:03.159 #undef SPDK_CONFIG_XNVME 00:12:03.159 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:03.159 10:07:16 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:03.159 10:07:16 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:12:03.159 10:07:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.159 10:07:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.159 10:07:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.159 10:07:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.160 10:07:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.160 10:07:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.160 10:07:16 -- paths/export.sh@5 -- # export PATH 00:12:03.160 10:07:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.160 10:07:16 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:12:03.160 10:07:16 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:12:03.160 10:07:16 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:12:03.160 10:07:16 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:12:03.160 10:07:16 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:03.160 10:07:16 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:03.160 10:07:16 -- pm/common@16 -- # TEST_TAG=N/A 00:12:03.160 10:07:16 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:12:03.160 10:07:16 -- common/autotest_common.sh@52 -- # : 1 00:12:03.160 10:07:16 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:12:03.160 10:07:16 -- common/autotest_common.sh@56 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:03.160 10:07:16 -- common/autotest_common.sh@58 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:12:03.160 10:07:16 -- common/autotest_common.sh@60 -- # : 1 00:12:03.160 10:07:16 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:03.160 10:07:16 -- common/autotest_common.sh@62 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:12:03.160 10:07:16 -- common/autotest_common.sh@64 -- # : 00:12:03.160 10:07:16 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:12:03.160 10:07:16 -- common/autotest_common.sh@66 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:12:03.160 10:07:16 -- common/autotest_common.sh@68 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:12:03.160 10:07:16 -- common/autotest_common.sh@70 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:12:03.160 10:07:16 -- common/autotest_common.sh@72 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:03.160 10:07:16 -- common/autotest_common.sh@74 -- # : 0 00:12:03.160 
10:07:16 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:12:03.160 10:07:16 -- common/autotest_common.sh@76 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:12:03.160 10:07:16 -- common/autotest_common.sh@78 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:12:03.160 10:07:16 -- common/autotest_common.sh@80 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:12:03.160 10:07:16 -- common/autotest_common.sh@82 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:12:03.160 10:07:16 -- common/autotest_common.sh@84 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:12:03.160 10:07:16 -- common/autotest_common.sh@86 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:12:03.160 10:07:16 -- common/autotest_common.sh@88 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:12:03.160 10:07:16 -- common/autotest_common.sh@90 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:03.160 10:07:16 -- common/autotest_common.sh@92 -- # : 1 00:12:03.160 10:07:16 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:12:03.160 10:07:16 -- common/autotest_common.sh@94 -- # : 1 00:12:03.160 10:07:16 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:12:03.160 10:07:16 -- common/autotest_common.sh@96 -- # : rdma 00:12:03.160 10:07:16 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:03.160 10:07:16 -- common/autotest_common.sh@98 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:12:03.160 10:07:16 -- common/autotest_common.sh@100 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:12:03.160 10:07:16 -- common/autotest_common.sh@102 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:12:03.160 10:07:16 -- common/autotest_common.sh@104 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:12:03.160 10:07:16 -- common/autotest_common.sh@106 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:12:03.160 10:07:16 -- common/autotest_common.sh@108 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:12:03.160 10:07:16 -- common/autotest_common.sh@110 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:12:03.160 10:07:16 -- common/autotest_common.sh@112 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:03.160 10:07:16 -- common/autotest_common.sh@114 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:12:03.160 10:07:16 -- common/autotest_common.sh@116 -- # : 1 00:12:03.160 10:07:16 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:12:03.160 10:07:16 -- common/autotest_common.sh@118 -- # : 00:12:03.160 10:07:16 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:03.160 10:07:16 -- common/autotest_common.sh@120 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:12:03.160 10:07:16 -- common/autotest_common.sh@122 -- # : 0 
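Note: the long run of `: 0` / `export SPDK_TEST_*` pairs above and below comes from autotest_common.sh giving every test flag a default before exporting it to the child scripts and fuzz binaries. A minimal sketch of that idiom, assuming the usual bash default-assignment form (the flag names are the ones visible in this trace; the real file defines many more):

    # Default each autotest flag to 0 unless the caller already set it,
    # then export it so downstream scripts and test apps can read it.
    : "${SPDK_TEST_NVME:=0}"     # appears as ': 0' in the xtrace when unset
    export SPDK_TEST_NVME
    : "${SPDK_TEST_FUZZER:=0}"   # this job pre-sets it to 1, hence the ': 1' above
    export SPDK_TEST_FUZZER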
00:12:03.160 10:07:16 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:12:03.160 10:07:16 -- common/autotest_common.sh@124 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:12:03.160 10:07:16 -- common/autotest_common.sh@126 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:12:03.160 10:07:16 -- common/autotest_common.sh@128 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:12:03.160 10:07:16 -- common/autotest_common.sh@130 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:12:03.160 10:07:16 -- common/autotest_common.sh@132 -- # : 00:12:03.160 10:07:16 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:12:03.160 10:07:16 -- common/autotest_common.sh@134 -- # : true 00:12:03.160 10:07:16 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:12:03.160 10:07:16 -- common/autotest_common.sh@136 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:12:03.160 10:07:16 -- common/autotest_common.sh@138 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:12:03.160 10:07:16 -- common/autotest_common.sh@140 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:12:03.160 10:07:16 -- common/autotest_common.sh@142 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:12:03.160 10:07:16 -- common/autotest_common.sh@144 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:12:03.160 10:07:16 -- common/autotest_common.sh@146 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:12:03.160 10:07:16 -- common/autotest_common.sh@148 -- # : 00:12:03.160 10:07:16 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:12:03.160 10:07:16 -- common/autotest_common.sh@150 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:12:03.160 10:07:16 -- common/autotest_common.sh@152 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:12:03.160 10:07:16 -- common/autotest_common.sh@154 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:12:03.160 10:07:16 -- common/autotest_common.sh@156 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:12:03.160 10:07:16 -- common/autotest_common.sh@158 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:12:03.160 10:07:16 -- common/autotest_common.sh@160 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:12:03.160 10:07:16 -- common/autotest_common.sh@163 -- # : 00:12:03.160 10:07:16 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:12:03.160 10:07:16 -- common/autotest_common.sh@165 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:12:03.160 10:07:16 -- common/autotest_common.sh@167 -- # : 0 00:12:03.160 10:07:16 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:03.160 10:07:16 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:12:03.160 10:07:16 -- 
common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:12:03.160 10:07:16 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:12:03.160 10:07:16 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:12:03.160 10:07:16 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.160 10:07:16 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.161 10:07:16 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.161 10:07:16 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.161 10:07:16 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:03.161 10:07:16 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:03.161 10:07:16 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:12:03.161 10:07:16 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:12:03.161 10:07:16 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:03.161 10:07:16 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:12:03.161 10:07:16 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:03.161 10:07:16 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:03.161 10:07:16 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:03.161 10:07:16 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:03.161 10:07:16 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:03.161 10:07:16 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:12:03.161 10:07:16 -- common/autotest_common.sh@196 -- # cat 00:12:03.161 10:07:16 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:12:03.161 10:07:16 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:03.161 10:07:16 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:03.161 10:07:16 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:03.161 10:07:16 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:03.161 10:07:16 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:12:03.161 10:07:16 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:12:03.161 10:07:16 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:12:03.161 10:07:16 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:12:03.161 10:07:16 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:12:03.161 10:07:16 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:12:03.161 10:07:16 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:03.161 10:07:16 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:03.161 10:07:16 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:03.161 10:07:16 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:03.161 10:07:16 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:03.161 10:07:16 -- common/autotest_common.sh@242 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:03.161 10:07:16 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:03.161 10:07:16 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:03.161 10:07:16 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:12:03.161 10:07:16 -- common/autotest_common.sh@249 -- # export valgrind= 00:12:03.161 10:07:16 -- common/autotest_common.sh@249 -- # valgrind= 00:12:03.161 10:07:16 -- common/autotest_common.sh@255 -- # uname -s 00:12:03.161 10:07:16 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:12:03.161 10:07:16 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:12:03.161 10:07:16 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:12:03.161 10:07:16 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:12:03.161 10:07:16 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:12:03.161 10:07:16 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:12:03.161 10:07:16 -- common/autotest_common.sh@265 -- # MAKE=make 00:12:03.161 10:07:16 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j72 00:12:03.161 10:07:16 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:12:03.161 10:07:16 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:12:03.161 10:07:16 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:12:03.161 10:07:16 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:12:03.161 10:07:16 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:12:03.161 10:07:16 -- common/autotest_common.sh@309 -- # [[ -z 1177054 ]] 00:12:03.161 10:07:16 -- common/autotest_common.sh@309 -- # kill -0 1177054 00:12:03.161 10:07:16 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:12:03.161 10:07:16 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:12:03.161 10:07:16 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:12:03.161 10:07:16 -- common/autotest_common.sh@322 -- # local mount target_dir 00:12:03.161 10:07:16 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:12:03.161 10:07:16 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:12:03.161 10:07:16 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:12:03.161 10:07:16 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:12:03.161 10:07:16 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.wRB3lY 00:12:03.161 10:07:16 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:03.161 10:07:16 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:12:03.161 10:07:16 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:12:03.161 10:07:16 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.wRB3lY/tests/vfio /tmp/spdk.wRB3lY 00:12:03.161 10:07:16 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:12:03.161 10:07:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:03.161 10:07:16 -- common/autotest_common.sh@318 -- # df -T 00:12:03.161 10:07:16 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 
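Note: the `df -T` walk starting here and continuing below is set_test_storage() sizing each mounted filesystem before it picks where to place the fuzzer's scratch space. A sketch of the same parsing pattern, using the variable names visible in the trace; collect_mounts is a hypothetical wrapper name, and the real function keeps the arrays around for the target_space comparison further down:

    # Load `df -T` into associative arrays keyed by mount point
    # (mounts/fss/avails/sizes/uses), skipping the header row.
    collect_mounts() {
        local -A mounts fss avails sizes uses
        local source fs size use avail _ mount
        while read -r source fs size use avail _ mount; do
            mounts["$mount"]=$source
            fss["$mount"]=$fs
            avails["$mount"]=$avail
            sizes["$mount"]=$size
            uses["$mount"]=$use
        done < <(df -T | grep -v Filesystem)
    }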
00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:12:03.161 10:07:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:12:03.161 10:07:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=818380800 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:12:03.161 10:07:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=4466049024 00:12:03.161 10:07:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=87552868352 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94508572672 00:12:03.161 10:07:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=6955704320 00:12:03.161 10:07:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=47251693568 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254286336 00:12:03.161 10:07:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:12:03.161 10:07:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=18895835136 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18901716992 00:12:03.161 10:07:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=5881856 00:12:03.161 10:07:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=47253860352 00:12:03.161 10:07:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254286336 00:12:03.161 10:07:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=425984 00:12:03.161 10:07:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:03.161 10:07:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:12:03.162 10:07:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:12:03.162 10:07:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450852352 00:12:03.162 10:07:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450856448 00:12:03.162 10:07:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:12:03.162 10:07:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:03.162 10:07:16 -- common/autotest_common.sh@357 
-- # printf '* Looking for test storage...\n' 00:12:03.162 * Looking for test storage... 00:12:03.162 10:07:16 -- common/autotest_common.sh@359 -- # local target_space new_size 00:12:03.162 10:07:16 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:12:03.162 10:07:16 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:03.162 10:07:16 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:03.162 10:07:16 -- common/autotest_common.sh@363 -- # mount=/ 00:12:03.162 10:07:16 -- common/autotest_common.sh@365 -- # target_space=87552868352 00:12:03.162 10:07:16 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:12:03.162 10:07:16 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:12:03.162 10:07:16 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:12:03.162 10:07:16 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:12:03.162 10:07:16 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:12:03.162 10:07:16 -- common/autotest_common.sh@372 -- # new_size=9170296832 00:12:03.162 10:07:16 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:03.162 10:07:16 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:03.162 10:07:16 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:03.162 10:07:16 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:03.162 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:03.162 10:07:16 -- common/autotest_common.sh@380 -- # return 0 00:12:03.162 10:07:16 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:12:03.162 10:07:16 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:12:03.162 10:07:16 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:03.162 10:07:16 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:03.162 10:07:16 -- common/autotest_common.sh@1672 -- # true 00:12:03.162 10:07:16 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:12:03.162 10:07:16 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:12:03.162 10:07:16 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:12:03.162 10:07:16 -- common/autotest_common.sh@27 -- # exec 00:12:03.162 10:07:16 -- common/autotest_common.sh@29 -- # exec 00:12:03.162 10:07:16 -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:03.162 10:07:16 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:03.162 10:07:16 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:03.162 10:07:16 -- common/autotest_common.sh@18 -- # set -x 00:12:03.162 10:07:16 -- vfio/run.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:12:03.162 10:07:16 -- ../common.sh@8 -- # pids=() 00:12:03.162 10:07:16 -- vfio/run.sh@58 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:12:03.162 10:07:16 -- vfio/run.sh@59 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:12:03.162 10:07:16 -- vfio/run.sh@59 -- # fuzz_num=7 00:12:03.162 10:07:16 -- vfio/run.sh@60 -- # (( fuzz_num != 0 )) 00:12:03.162 10:07:16 -- vfio/run.sh@62 -- # trap 'cleanup /tmp/vfio-user-*; exit 1' SIGINT SIGTERM EXIT 00:12:03.162 10:07:16 -- vfio/run.sh@65 -- # mem_size=0 00:12:03.162 10:07:16 -- vfio/run.sh@66 -- # [[ 1 -eq 1 ]] 00:12:03.162 10:07:16 -- vfio/run.sh@67 -- # start_llvm_fuzz_short 7 1 00:12:03.162 10:07:16 -- ../common.sh@69 -- # local fuzz_num=7 00:12:03.162 10:07:16 -- ../common.sh@70 -- # local time=1 00:12:03.162 10:07:16 -- ../common.sh@72 -- # (( i = 0 )) 00:12:03.162 10:07:16 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:03.162 10:07:16 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:12:03.162 10:07:16 -- vfio/run.sh@22 -- # local fuzzer_type=0 00:12:03.162 10:07:16 -- vfio/run.sh@23 -- # local timen=1 00:12:03.162 10:07:16 -- vfio/run.sh@24 -- # local core=0x1 00:12:03.162 10:07:16 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:12:03.162 10:07:16 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:12:03.162 10:07:16 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:12:03.162 10:07:16 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:12:03.162 10:07:16 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:12:03.422 10:07:16 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:12:03.422 10:07:16 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:12:03.422 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:03.422 10:07:16 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:12:03.422 [2024-04-24 10:07:16.470485] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
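Note: the start_llvm_fuzz call above (and each later one) follows the same shape: create the per-fuzzer directories under /tmp/vfio-user-N, rewrite the vfio-user JSON template so it points at that fuzzer's domain sockets, then launch llvm_vfio_fuzz for one second; the run's output continues below. A condensed sketch for fuzzer 0, with $spdk standing in for the workspace path, the sed redirection (not visible in the xtrace) assumed to land in the per-fuzzer config, and the -P output directory omitted for brevity:

    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    fuzzer_dir=/tmp/vfio-user-0
    mkdir -p "$fuzzer_dir" "$fuzzer_dir/domain/1" "$fuzzer_dir/domain/2" \
             "$spdk/../corpus/llvm_vfio_0"
    # Re-point the template config at this fuzzer's vfio-user sockets
    # (equivalent to the single multi-line sed expression in the trace).
    sed -e "s%/tmp/vfio-user/domain/1%$fuzzer_dir/domain/1%" \
        -e "s%/tmp/vfio-user/domain/2%$fuzzer_dir/domain/2%" \
        "$spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" \
        > "$fuzzer_dir/fuzz_vfio_json.conf"
    # -t 1: one-second run (the short pass); -D: corpus dir; -Z matches fuzzer_type.
    "$spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" -m 0x1 -s 0 \
        -F "$fuzzer_dir/domain/1" -c "$fuzzer_dir/fuzz_vfio_json.conf" -t 1 \
        -D "$spdk/../corpus/llvm_vfio_0" -Y "$fuzzer_dir/domain/2" \
        -r "$fuzzer_dir/spdk0.sock" -Z 0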
00:12:03.422 [2024-04-24 10:07:16.470563] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177098 ] 00:12:03.422 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.422 [2024-04-24 10:07:16.550460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.422 [2024-04-24 10:07:16.637525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:03.422 [2024-04-24 10:07:16.637680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.680 INFO: Running with entropic power schedule (0xFF, 100). 00:12:03.680 INFO: Seed: 4055145153 00:12:03.680 INFO: Loaded 1 modules (338449 inline 8-bit counters): 338449 [0x27c7fcc, 0x281a9dd), 00:12:03.680 INFO: Loaded 1 PC tables (338449 PCs): 338449 [0x281a9e0,0x2d44af0), 00:12:03.680 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:12:03.680 INFO: A corpus is not provided, starting from an empty corpus 00:12:03.680 #2 INITED exec/s: 0 rss: 61Mb 00:12:03.680 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:03.680 This may also happen if the target rejected all inputs we tried so far 00:12:04.196 NEW_FUNC[1/621]: 0x47f5d0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:85 00:12:04.196 NEW_FUNC[2/621]: 0x485170 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:04.196 #9 NEW cov: 10704 ft: 10589 corp: 2/50b lim: 60 exec/s: 0 rss: 69Mb L: 49/49 MS: 2 ChangeBit-InsertRepeatedBytes- 00:12:04.455 #10 NEW cov: 10718 ft: 13423 corp: 3/103b lim: 60 exec/s: 0 rss: 70Mb L: 53/53 MS: 1 InsertRepeatedBytes- 00:12:04.455 NEW_FUNC[1/1]: 0x19276a0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:12:04.455 #13 NEW cov: 10735 ft: 14870 corp: 4/133b lim: 60 exec/s: 0 rss: 70Mb L: 30/53 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:12:04.714 #14 NEW cov: 10735 ft: 15760 corp: 5/186b lim: 60 exec/s: 14 rss: 70Mb L: 53/53 MS: 1 ChangeBinInt- 00:12:04.972 #15 NEW cov: 10735 ft: 16112 corp: 6/240b lim: 60 exec/s: 15 rss: 70Mb L: 54/54 MS: 1 InsertByte- 00:12:05.231 #16 NEW cov: 10735 ft: 17017 corp: 7/290b lim: 60 exec/s: 16 rss: 71Mb L: 50/54 MS: 1 InsertByte- 00:12:05.489 #17 NEW cov: 10735 ft: 17372 corp: 8/323b lim: 60 exec/s: 17 rss: 71Mb L: 33/54 MS: 1 InsertRepeatedBytes- 00:12:05.489 #18 NEW cov: 10742 ft: 17525 corp: 9/362b lim: 60 exec/s: 18 rss: 71Mb L: 39/54 MS: 1 EraseBytes- 00:12:05.748 #19 NEW cov: 10742 ft: 17759 corp: 10/415b lim: 60 exec/s: 9 rss: 71Mb L: 53/54 MS: 1 ChangeBinInt- 00:12:05.748 #19 DONE cov: 10742 ft: 17759 corp: 10/415b lim: 60 exec/s: 9 rss: 71Mb 00:12:05.748 Done 19 runs in 2 second(s) 00:12:06.008 10:07:19 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-0 00:12:06.008 10:07:19 -- ../common.sh@72 -- # (( i++ )) 00:12:06.008 10:07:19 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:06.008 10:07:19 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:12:06.008 10:07:19 -- vfio/run.sh@22 -- # local fuzzer_type=1 00:12:06.008 10:07:19 -- vfio/run.sh@23 -- # local timen=1 00:12:06.008 10:07:19 -- vfio/run.sh@24 -- # local core=0x1 00:12:06.008 10:07:19 -- vfio/run.sh@25 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:12:06.008 10:07:19 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:12:06.008 10:07:19 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:12:06.008 10:07:19 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:12:06.008 10:07:19 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:12:06.008 10:07:19 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:12:06.008 10:07:19 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:12:06.008 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:06.008 10:07:19 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:12:06.008 [2024-04-24 10:07:19.241781] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:12:06.008 [2024-04-24 10:07:19.241855] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177463 ] 00:12:06.008 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.267 [2024-04-24 10:07:19.321741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.267 [2024-04-24 10:07:19.403581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:06.267 [2024-04-24 10:07:19.403732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.526 INFO: Running with entropic power schedule (0xFF, 100). 00:12:06.526 INFO: Seed: 2526176760 00:12:06.526 INFO: Loaded 1 modules (338449 inline 8-bit counters): 338449 [0x27c7fcc, 0x281a9dd), 00:12:06.526 INFO: Loaded 1 PC tables (338449 PCs): 338449 [0x281a9e0,0x2d44af0), 00:12:06.526 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:12:06.526 INFO: A corpus is not provided, starting from an empty corpus 00:12:06.526 #2 INITED exec/s: 0 rss: 62Mb 00:12:06.526 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:06.526 This may also happen if the target rejected all inputs we tried so far 00:12:06.526 [2024-04-24 10:07:19.721093] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:06.526 [2024-04-24 10:07:19.721142] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:06.526 [2024-04-24 10:07:19.721162] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:07.044 NEW_FUNC[1/628]: 0x47fb70 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:72 00:12:07.044 NEW_FUNC[2/628]: 0x485170 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:07.044 #6 NEW cov: 10719 ft: 10685 corp: 2/5b lim: 40 exec/s: 0 rss: 69Mb L: 4/4 MS: 4 CopyPart-ChangeByte-CopyPart-CopyPart- 00:12:07.044 [2024-04-24 10:07:20.206445] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:07.044 [2024-04-24 10:07:20.206485] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:07.044 [2024-04-24 10:07:20.206504] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:07.302 #9 NEW cov: 10736 ft: 13414 corp: 3/28b lim: 40 exec/s: 0 rss: 70Mb L: 23/23 MS: 3 ChangeBit-CopyPart-InsertRepeatedBytes- 00:12:07.302 [2024-04-24 10:07:20.410999] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:07.302 [2024-04-24 10:07:20.411026] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:07.302 [2024-04-24 10:07:20.411043] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:07.302 NEW_FUNC[1/1]: 0x19276a0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:12:07.302 #15 NEW cov: 10753 ft: 15178 corp: 4/64b lim: 40 exec/s: 0 rss: 70Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:12:07.562 [2024-04-24 10:07:20.610719] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:07.562 [2024-04-24 10:07:20.610744] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:07.562 [2024-04-24 10:07:20.610761] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:07.562 #19 NEW cov: 10753 ft: 15359 corp: 5/70b lim: 40 exec/s: 19 rss: 71Mb L: 6/36 MS: 4 ChangeBinInt-InsertByte-ShuffleBytes-CrossOver- 00:12:07.562 [2024-04-24 10:07:20.820301] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:07.562 [2024-04-24 10:07:20.820325] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:07.562 [2024-04-24 10:07:20.820342] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:07.821 #20 NEW cov: 10753 ft: 16119 corp: 6/80b lim: 40 exec/s: 20 rss: 71Mb L: 10/36 MS: 1 CMP- DE: "\003\000\000\000"- 00:12:07.821 [2024-04-24 10:07:21.018627] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:07.821 [2024-04-24 10:07:21.018651] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:07.821 [2024-04-24 10:07:21.018669] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:08.080 #21 NEW cov: 10753 ft: 16631 corp: 7/86b lim: 40 exec/s: 21 rss: 
71Mb L: 6/36 MS: 1 ChangeByte- 00:12:08.080 [2024-04-24 10:07:21.212859] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:08.080 [2024-04-24 10:07:21.212883] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:08.080 [2024-04-24 10:07:21.212900] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:08.080 #22 NEW cov: 10753 ft: 16771 corp: 8/100b lim: 40 exec/s: 22 rss: 71Mb L: 14/36 MS: 1 CrossOver- 00:12:08.339 [2024-04-24 10:07:21.410896] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:08.339 [2024-04-24 10:07:21.410920] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:08.339 [2024-04-24 10:07:21.410937] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:08.339 #23 NEW cov: 10760 ft: 17106 corp: 9/123b lim: 40 exec/s: 23 rss: 71Mb L: 23/36 MS: 1 ChangeBit- 00:12:08.339 [2024-04-24 10:07:21.604994] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:08.339 [2024-04-24 10:07:21.605017] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:08.339 [2024-04-24 10:07:21.605034] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:08.598 #24 NEW cov: 10760 ft: 17467 corp: 10/137b lim: 40 exec/s: 12 rss: 71Mb L: 14/36 MS: 1 ChangeBit- 00:12:08.598 #24 DONE cov: 10760 ft: 17467 corp: 10/137b lim: 40 exec/s: 12 rss: 71Mb 00:12:08.598 ###### Recommended dictionary. ###### 00:12:08.598 "\003\000\000\000" # Uses: 0 00:12:08.598 ###### End of recommended dictionary. ###### 00:12:08.598 Done 24 runs in 2 second(s) 00:12:08.857 10:07:22 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-1 00:12:08.857 10:07:22 -- ../common.sh@72 -- # (( i++ )) 00:12:08.857 10:07:22 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:08.857 10:07:22 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:12:08.857 10:07:22 -- vfio/run.sh@22 -- # local fuzzer_type=2 00:12:08.857 10:07:22 -- vfio/run.sh@23 -- # local timen=1 00:12:08.857 10:07:22 -- vfio/run.sh@24 -- # local core=0x1 00:12:08.857 10:07:22 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:12:08.857 10:07:22 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:12:08.857 10:07:22 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:12:08.857 10:07:22 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:12:08.857 10:07:22 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:12:08.857 10:07:22 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:12:08.857 10:07:22 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:12:08.857 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:08.857 10:07:22 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:12:08.857 [2024-04-24 10:07:22.063946] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:12:08.857 [2024-04-24 10:07:22.064056] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177833 ] 00:12:08.857 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.116 [2024-04-24 10:07:22.145037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.116 [2024-04-24 10:07:22.226448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:09.116 [2024-04-24 10:07:22.226600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.374 INFO: Running with entropic power schedule (0xFF, 100). 00:12:09.374 INFO: Seed: 1056203896 00:12:09.374 INFO: Loaded 1 modules (338449 inline 8-bit counters): 338449 [0x27c7fcc, 0x281a9dd), 00:12:09.374 INFO: Loaded 1 PC tables (338449 PCs): 338449 [0x281a9e0,0x2d44af0), 00:12:09.374 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:12:09.375 INFO: A corpus is not provided, starting from an empty corpus 00:12:09.375 #2 INITED exec/s: 0 rss: 62Mb 00:12:09.375 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:09.375 This may also happen if the target rejected all inputs we tried so far 00:12:09.375 [2024-04-24 10:07:22.523655] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:09.942 NEW_FUNC[1/626]: 0x480550 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:104 00:12:09.942 NEW_FUNC[2/626]: 0x485170 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:09.942 #6 NEW cov: 10702 ft: 10394 corp: 2/51b lim: 80 exec/s: 0 rss: 68Mb L: 50/50 MS: 4 ChangeByte-CopyPart-ChangeBit-InsertRepeatedBytes- 00:12:09.942 [2024-04-24 10:07:22.995711] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:12:09.942 [2024-04-24 10:07:22.995765] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:12:09.942 NEW_FUNC[1/2]: 0x132a3e0 in endpoint_id /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:638 00:12:09.942 NEW_FUNC[2/2]: 0x132a670 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3084 00:12:09.942 #7 NEW cov: 10729 ft: 12993 corp: 3/63b lim: 80 exec/s: 0 rss: 70Mb L: 12/50 MS: 1 InsertRepeatedBytes- 00:12:09.942 [2024-04-24 10:07:23.189051] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:12:09.942 [2024-04-24 10:07:23.189090] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:12:10.201 NEW_FUNC[1/1]: 0x19276a0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:12:10.201 #8 NEW cov: 10746 ft: 13538 corp: 4/75b lim: 80 exec/s: 0 rss: 70Mb L: 12/50 MS: 1 ChangeBit- 00:12:10.201 [2024-04-24 10:07:23.373296] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:10.201 #9 NEW cov: 10746 ft: 14471 corp: 
5/95b lim: 80 exec/s: 9 rss: 70Mb L: 20/50 MS: 1 InsertRepeatedBytes- 00:12:10.486 [2024-04-24 10:07:23.551472] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:10.486 #10 NEW cov: 10746 ft: 14817 corp: 6/132b lim: 80 exec/s: 10 rss: 70Mb L: 37/50 MS: 1 EraseBytes- 00:12:10.486 [2024-04-24 10:07:23.733602] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:10.777 #15 NEW cov: 10746 ft: 15019 corp: 7/147b lim: 80 exec/s: 15 rss: 70Mb L: 15/50 MS: 5 EraseBytes-ChangeByte-ChangeBinInt-ShuffleBytes-InsertRepeatedBytes- 00:12:10.777 [2024-04-24 10:07:23.920527] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:10.777 #16 NEW cov: 10746 ft: 15960 corp: 8/205b lim: 80 exec/s: 16 rss: 71Mb L: 58/58 MS: 1 InsertRepeatedBytes- 00:12:11.036 [2024-04-24 10:07:24.103520] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:11.036 #17 NEW cov: 10746 ft: 16166 corp: 9/219b lim: 80 exec/s: 17 rss: 71Mb L: 14/58 MS: 1 EraseBytes- 00:12:11.036 [2024-04-24 10:07:24.287306] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:11.294 #18 NEW cov: 10753 ft: 16512 corp: 10/269b lim: 80 exec/s: 18 rss: 71Mb L: 50/58 MS: 1 CrossOver- 00:12:11.294 [2024-04-24 10:07:24.468459] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:11.553 #19 NEW cov: 10753 ft: 16978 corp: 11/299b lim: 80 exec/s: 9 rss: 71Mb L: 30/58 MS: 1 CrossOver- 00:12:11.553 #19 DONE cov: 10753 ft: 16978 corp: 11/299b lim: 80 exec/s: 9 rss: 71Mb 00:12:11.553 Done 19 runs in 2 second(s) 00:12:11.813 10:07:24 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-2 00:12:11.813 10:07:24 -- ../common.sh@72 -- # (( i++ )) 00:12:11.813 10:07:24 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:11.813 10:07:24 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:12:11.813 10:07:24 -- vfio/run.sh@22 -- # local fuzzer_type=3 00:12:11.813 10:07:24 -- vfio/run.sh@23 -- # local timen=1 00:12:11.813 10:07:24 -- vfio/run.sh@24 -- # local core=0x1 00:12:11.813 10:07:24 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:12:11.813 10:07:24 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:12:11.813 10:07:24 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:12:11.813 10:07:24 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:12:11.813 10:07:24 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:12:11.813 10:07:24 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:12:11.813 10:07:24 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:12:11.813 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:11.813 10:07:24 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r 
/tmp/vfio-user-3/spdk3.sock -Z 3 00:12:11.813 [2024-04-24 10:07:24.923443] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:12:11.813 [2024-04-24 10:07:24.923517] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178204 ] 00:12:11.813 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.813 [2024-04-24 10:07:25.003320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.813 [2024-04-24 10:07:25.086220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:11.813 [2024-04-24 10:07:25.086375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.072 INFO: Running with entropic power schedule (0xFF, 100). 00:12:12.072 INFO: Seed: 3912219913 00:12:12.072 INFO: Loaded 1 modules (338449 inline 8-bit counters): 338449 [0x27c7fcc, 0x281a9dd), 00:12:12.072 INFO: Loaded 1 PC tables (338449 PCs): 338449 [0x281a9e0,0x2d44af0), 00:12:12.072 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:12:12.072 INFO: A corpus is not provided, starting from an empty corpus 00:12:12.072 #2 INITED exec/s: 0 rss: 62Mb 00:12:12.072 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:12.072 This may also happen if the target rejected all inputs we tried so far 00:12:12.591 NEW_FUNC[1/622]: 0x480c30 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:125 00:12:12.591 NEW_FUNC[2/622]: 0x485170 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:12.591 #17 NEW cov: 10694 ft: 10662 corp: 2/66b lim: 320 exec/s: 0 rss: 69Mb L: 65/65 MS: 5 CMP-ShuffleBytes-EraseBytes-ShuffleBytes-InsertRepeatedBytes- DE: "\000\011+pC\202\227\360"- 00:12:12.849 #18 NEW cov: 10708 ft: 14367 corp: 3/132b lim: 320 exec/s: 0 rss: 70Mb L: 66/66 MS: 1 InsertByte- 00:12:13.107 NEW_FUNC[1/1]: 0x19276a0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:12:13.107 #19 NEW cov: 10725 ft: 15465 corp: 4/198b lim: 320 exec/s: 0 rss: 70Mb L: 66/66 MS: 1 ShuffleBytes- 00:12:13.107 #20 NEW cov: 10725 ft: 15723 corp: 5/318b lim: 320 exec/s: 20 rss: 71Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:12:13.366 #21 NEW cov: 10725 ft: 15840 corp: 6/385b lim: 320 exec/s: 21 rss: 71Mb L: 67/120 MS: 1 CrossOver- 00:12:13.624 #22 NEW cov: 10725 ft: 15940 corp: 7/451b lim: 320 exec/s: 22 rss: 71Mb L: 66/120 MS: 1 CopyPart- 00:12:13.883 #23 NEW cov: 10725 ft: 16108 corp: 8/517b lim: 320 exec/s: 23 rss: 71Mb L: 66/120 MS: 1 CMP- DE: "e\000\000\000\000\000\000\000"- 00:12:13.883 #24 NEW cov: 10732 ft: 16210 corp: 9/599b lim: 320 exec/s: 24 rss: 71Mb L: 82/120 MS: 1 InsertRepeatedBytes- 00:12:14.143 #25 NEW cov: 10732 ft: 16553 corp: 10/665b lim: 320 exec/s: 25 rss: 71Mb L: 66/120 MS: 1 CrossOver- 00:12:14.402 #26 NEW cov: 10732 ft: 16895 corp: 11/731b lim: 320 exec/s: 13 rss: 71Mb L: 66/120 MS: 1 ChangeByte- 00:12:14.402 #26 DONE cov: 10732 ft: 16895 corp: 11/731b lim: 320 exec/s: 13 rss: 71Mb 00:12:14.402 ###### Recommended dictionary. ###### 00:12:14.402 "\000\011+pC\202\227\360" # Uses: 0 00:12:14.402 "e\000\000\000\000\000\000\000" # Uses: 0 00:12:14.402 ###### End of recommended dictionary. 
###### 00:12:14.402 Done 26 runs in 2 second(s) 00:12:14.661 10:07:27 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-3 00:12:14.661 10:07:27 -- ../common.sh@72 -- # (( i++ )) 00:12:14.661 10:07:27 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:14.661 10:07:27 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:12:14.661 10:07:27 -- vfio/run.sh@22 -- # local fuzzer_type=4 00:12:14.661 10:07:27 -- vfio/run.sh@23 -- # local timen=1 00:12:14.661 10:07:27 -- vfio/run.sh@24 -- # local core=0x1 00:12:14.661 10:07:27 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:12:14.661 10:07:27 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:12:14.661 10:07:27 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:12:14.661 10:07:27 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:12:14.661 10:07:27 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:12:14.661 10:07:27 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:12:14.661 10:07:27 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:12:14.661 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:14.661 10:07:27 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:12:14.661 [2024-04-24 10:07:27.837483] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:12:14.661 [2024-04-24 10:07:27.837556] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178575 ] 00:12:14.661 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.661 [2024-04-24 10:07:27.919724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.919 [2024-04-24 10:07:28.008220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:14.919 [2024-04-24 10:07:28.008370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.178 INFO: Running with entropic power schedule (0xFF, 100). 00:12:15.178 INFO: Seed: 2546244556 00:12:15.178 INFO: Loaded 1 modules (338449 inline 8-bit counters): 338449 [0x27c7fcc, 0x281a9dd), 00:12:15.178 INFO: Loaded 1 PC tables (338449 PCs): 338449 [0x281a9e0,0x2d44af0), 00:12:15.178 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:12:15.178 INFO: A corpus is not provided, starting from an empty corpus 00:12:15.178 #2 INITED exec/s: 0 rss: 62Mb 00:12:15.178 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:15.178 This may also happen if the target rejected all inputs we tried so far 00:12:15.696 NEW_FUNC[1/622]: 0x4814b0 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:145 00:12:15.696 NEW_FUNC[2/622]: 0x485170 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:15.696 #7 NEW cov: 10686 ft: 10643 corp: 2/99b lim: 320 exec/s: 0 rss: 68Mb L: 98/98 MS: 5 ChangeBit-ShuffleBytes-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:12:15.696 #8 NEW cov: 10703 ft: 13636 corp: 3/197b lim: 320 exec/s: 0 rss: 70Mb L: 98/98 MS: 1 ShuffleBytes- 00:12:15.955 NEW_FUNC[1/1]: 0x19276a0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:12:15.955 #9 NEW cov: 10720 ft: 15138 corp: 4/295b lim: 320 exec/s: 0 rss: 70Mb L: 98/98 MS: 1 ChangeBinInt- 00:12:16.214 #10 NEW cov: 10720 ft: 15336 corp: 5/393b lim: 320 exec/s: 10 rss: 70Mb L: 98/98 MS: 1 CrossOver- 00:12:16.473 #11 NEW cov: 10720 ft: 15919 corp: 6/552b lim: 320 exec/s: 11 rss: 70Mb L: 159/159 MS: 1 InsertRepeatedBytes- 00:12:16.473 #12 NEW cov: 10720 ft: 16401 corp: 7/651b lim: 320 exec/s: 12 rss: 71Mb L: 99/159 MS: 1 InsertByte- 00:12:16.731 #13 NEW cov: 10720 ft: 16474 corp: 8/751b lim: 320 exec/s: 13 rss: 71Mb L: 100/159 MS: 1 CrossOver- 00:12:16.991 #14 NEW cov: 10727 ft: 16670 corp: 9/849b lim: 320 exec/s: 14 rss: 71Mb L: 98/159 MS: 1 ChangeBinInt- 00:12:16.991 #15 NEW cov: 10727 ft: 16700 corp: 10/949b lim: 320 exec/s: 7 rss: 71Mb L: 100/159 MS: 1 ChangeByte- 00:12:16.991 #15 DONE cov: 10727 ft: 16700 corp: 10/949b lim: 320 exec/s: 7 rss: 71Mb 00:12:16.991 Done 15 runs in 2 second(s) 00:12:17.559 10:07:30 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-4 00:12:17.559 10:07:30 -- ../common.sh@72 -- # (( i++ )) 00:12:17.559 10:07:30 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:17.559 10:07:30 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:12:17.559 10:07:30 -- vfio/run.sh@22 -- # local fuzzer_type=5 00:12:17.559 10:07:30 -- vfio/run.sh@23 -- # local timen=1 00:12:17.559 10:07:30 -- vfio/run.sh@24 -- # local core=0x1 00:12:17.559 10:07:30 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:12:17.559 10:07:30 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:12:17.559 10:07:30 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:12:17.559 10:07:30 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:12:17.559 10:07:30 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:12:17.559 10:07:30 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:12:17.559 10:07:30 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:12:17.559 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:17.559 10:07:30 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 
-r /tmp/vfio-user-5/spdk5.sock -Z 5 00:12:17.559 [2024-04-24 10:07:30.593913] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:12:17.559 [2024-04-24 10:07:30.594002] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178954 ] 00:12:17.559 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.559 [2024-04-24 10:07:30.675057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.559 [2024-04-24 10:07:30.760148] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:17.559 [2024-04-24 10:07:30.760303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.819 INFO: Running with entropic power schedule (0xFF, 100). 00:12:17.819 INFO: Seed: 991280007 00:12:17.819 INFO: Loaded 1 modules (338449 inline 8-bit counters): 338449 [0x27c7fcc, 0x281a9dd), 00:12:17.819 INFO: Loaded 1 PC tables (338449 PCs): 338449 [0x281a9e0,0x2d44af0), 00:12:17.819 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:12:17.819 INFO: A corpus is not provided, starting from an empty corpus 00:12:17.819 #2 INITED exec/s: 0 rss: 62Mb 00:12:17.819 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:17.819 This may also happen if the target rejected all inputs we tried so far 00:12:17.819 [2024-04-24 10:07:31.025136] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:17.819 [2024-04-24 10:07:31.025186] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:18.336 NEW_FUNC[1/628]: 0x481eb0 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:172 00:12:18.336 NEW_FUNC[2/628]: 0x485170 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:18.336 #9 NEW cov: 10724 ft: 10283 corp: 2/114b lim: 120 exec/s: 0 rss: 68Mb L: 113/113 MS: 2 ChangeByte-InsertRepeatedBytes- 00:12:18.336 [2024-04-24 10:07:31.451121] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:18.336 [2024-04-24 10:07:31.451176] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:18.336 #10 NEW cov: 10738 ft: 12721 corp: 3/177b lim: 120 exec/s: 0 rss: 69Mb L: 63/113 MS: 1 CrossOver- 00:12:18.336 [2024-04-24 10:07:31.566038] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:18.336 [2024-04-24 10:07:31.566082] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:18.595 #15 NEW cov: 10738 ft: 13936 corp: 4/223b lim: 120 exec/s: 0 rss: 70Mb L: 46/113 MS: 5 CrossOver-CrossOver-ChangeByte-CopyPart-InsertRepeatedBytes- 00:12:18.595 [2024-04-24 10:07:31.691846] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:18.595 [2024-04-24 10:07:31.691883] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:18.595 #21 NEW cov: 10738 ft: 14171 corp: 5/319b lim: 120 exec/s: 0 rss: 70Mb L: 96/113 MS: 1 CrossOver- 00:12:18.595 [2024-04-24 10:07:31.807735] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid 
argument 00:12:18.595 [2024-04-24 10:07:31.807771] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:18.853 #27 NEW cov: 10738 ft: 14414 corp: 6/365b lim: 120 exec/s: 0 rss: 70Mb L: 46/113 MS: 1 ChangeBinInt- 00:12:18.853 [2024-04-24 10:07:31.921529] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:18.853 [2024-04-24 10:07:31.921565] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:18.853 #31 NEW cov: 10738 ft: 14740 corp: 7/452b lim: 120 exec/s: 31 rss: 70Mb L: 87/113 MS: 4 ChangeByte-InsertByte-EraseBytes-CrossOver- 00:12:18.853 [2024-04-24 10:07:32.054737] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:18.853 [2024-04-24 10:07:32.054778] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:19.113 #32 NEW cov: 10738 ft: 15066 corp: 8/505b lim: 120 exec/s: 32 rss: 70Mb L: 53/113 MS: 1 InsertRepeatedBytes- 00:12:19.113 [2024-04-24 10:07:32.176150] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:19.113 [2024-04-24 10:07:32.176188] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:19.113 #33 NEW cov: 10738 ft: 15140 corp: 9/568b lim: 120 exec/s: 33 rss: 70Mb L: 63/113 MS: 1 ShuffleBytes- 00:12:19.113 [2024-04-24 10:07:32.291230] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:19.113 [2024-04-24 10:07:32.291265] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:19.113 #34 NEW cov: 10738 ft: 15881 corp: 10/614b lim: 120 exec/s: 34 rss: 70Mb L: 46/113 MS: 1 ChangeByte- 00:12:19.371 [2024-04-24 10:07:32.405262] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:19.371 [2024-04-24 10:07:32.405298] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:19.371 #35 NEW cov: 10738 ft: 15947 corp: 11/660b lim: 120 exec/s: 35 rss: 70Mb L: 46/113 MS: 1 ChangeASCIIInt- 00:12:19.371 [2024-04-24 10:07:32.520260] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:19.371 [2024-04-24 10:07:32.520298] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:19.371 #36 NEW cov: 10738 ft: 16056 corp: 12/740b lim: 120 exec/s: 36 rss: 70Mb L: 80/113 MS: 1 InsertRepeatedBytes- 00:12:19.371 [2024-04-24 10:07:32.634311] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:19.371 [2024-04-24 10:07:32.634343] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:19.629 #37 NEW cov: 10738 ft: 16223 corp: 13/786b lim: 120 exec/s: 37 rss: 71Mb L: 46/113 MS: 1 ChangeByte- 00:12:19.629 [2024-04-24 10:07:32.748146] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:19.629 [2024-04-24 10:07:32.748182] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:19.629 NEW_FUNC[1/1]: 0x19276a0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:12:19.629 #43 NEW cov: 10761 ft: 16386 corp: 14/849b lim: 120 exec/s: 43 rss: 71Mb L: 63/113 MS: 1 ChangeBit- 00:12:19.629 [2024-04-24 10:07:32.862870] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:19.629 [2024-04-24 
10:07:32.862905] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:19.888 #44 NEW cov: 10761 ft: 16698 corp: 15/962b lim: 120 exec/s: 44 rss: 71Mb L: 113/113 MS: 1 ShuffleBytes- 00:12:19.888 [2024-04-24 10:07:32.977754] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:19.888 [2024-04-24 10:07:32.977788] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:19.888 #45 NEW cov: 10761 ft: 16811 corp: 16/1025b lim: 120 exec/s: 22 rss: 71Mb L: 63/113 MS: 1 ChangeBinInt- 00:12:19.888 #45 DONE cov: 10761 ft: 16811 corp: 16/1025b lim: 120 exec/s: 22 rss: 71Mb 00:12:19.888 Done 45 runs in 2 second(s) 00:12:20.148 10:07:33 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-5 00:12:20.148 10:07:33 -- ../common.sh@72 -- # (( i++ )) 00:12:20.148 10:07:33 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:20.148 10:07:33 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:12:20.148 10:07:33 -- vfio/run.sh@22 -- # local fuzzer_type=6 00:12:20.148 10:07:33 -- vfio/run.sh@23 -- # local timen=1 00:12:20.148 10:07:33 -- vfio/run.sh@24 -- # local core=0x1 00:12:20.148 10:07:33 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:12:20.148 10:07:33 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:12:20.148 10:07:33 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:12:20.148 10:07:33 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:12:20.148 10:07:33 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:12:20.148 10:07:33 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:12:20.148 10:07:33 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:12:20.148 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:20.148 10:07:33 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:12:20.148 [2024-04-24 10:07:33.378533] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:12:20.148 [2024-04-24 10:07:33.378612] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179421 ] 00:12:20.148 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.408 [2024-04-24 10:07:33.459265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.408 [2024-04-24 10:07:33.537491] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:20.408 [2024-04-24 10:07:33.537634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.667 INFO: Running with entropic power schedule (0xFF, 100). 
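The shell trace just above (start_llvm_fuzz 6 1 0x1 and the commands that follow it) repeats the same per-target cycle that already ran for fuzzer types 2 through 5 earlier in this log: create the per-instance directories, specialize the shared vfio-user JSON config with sed, run llvm_vfio_fuzz for a fixed time budget, then remove the temporary directories. The sketch below reconstructs that cycle in plain bash purely for readability. It is an assumption based on the traced commands, not the actual ../common.sh and vfio/run.sh, which may differ; in particular WORKSPACE is a stand-in for the Jenkins workspace path, fuzz_num is inferred, and the redirect of the sed output into "$vfiouser_cfg" is assumed because xtrace does not print redirections.

# Minimal sketch, assuming the traced commands map one-to-one onto the scripts.
WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

start_llvm_fuzz() {
    local fuzzer_type=$1      # which vfio-user command is fuzzed (-Z)
    local timen=$2            # run time per target, in seconds (-t)
    local core=$3             # SPDK core mask (-m)
    local corpus_dir=$WORKSPACE/../corpus/llvm_vfio_$fuzzer_type
    local fuzzer_dir=/tmp/vfio-user-$fuzzer_type
    local vfiouser_dir=$fuzzer_dir/domain/1
    local vfiouser_io_dir=$fuzzer_dir/domain/2
    local vfiouser_cfg=$fuzzer_dir/fuzz_vfio_json.conf

    mkdir -p "$fuzzer_dir" "$vfiouser_dir" "$vfiouser_io_dir" "$corpus_dir"

    # Specialize the shared config template for this instance
    # (redirect into "$vfiouser_cfg" assumed, not shown by xtrace).
    sed -e "s%/tmp/vfio-user/domain/1%$vfiouser_dir%" \
        -e "s%/tmp/vfio-user/domain/2%$vfiouser_io_dir%" \
        "$WORKSPACE/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$vfiouser_cfg"

    # One short libFuzzer session against this target, then tear the dirs down.
    "$WORKSPACE/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
        -m "$core" -s 0 -P "$WORKSPACE/../output/llvm/" \
        -F "$vfiouser_dir" -c "$vfiouser_cfg" -t "$timen" \
        -D "$corpus_dir" -Y "$vfiouser_io_dir" \
        -r "$fuzzer_dir/spdk$fuzzer_type.sock" -Z "$fuzzer_type"

    rm -rf "$fuzzer_dir"
}

fuzz_num=7   # assumed; type 6 is the last target exercised in this log
for (( i = 0; i < fuzz_num; i++ )); do
    start_llvm_fuzz "$i" 1 0x1   # one second per target on core mask 0x1
done

With -t 1 each target gets only a one-second fuzzing budget and starts from an empty corpus, which is consistent with every run above reporting "0 files found", "A corpus is not provided", and "Done N runs in 2 second(s)".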
00:12:20.667 INFO: Seed: 3773272292 00:12:20.667 INFO: Loaded 1 modules (338449 inline 8-bit counters): 338449 [0x27c7fcc, 0x281a9dd), 00:12:20.667 INFO: Loaded 1 PC tables (338449 PCs): 338449 [0x281a9e0,0x2d44af0), 00:12:20.667 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:12:20.667 INFO: A corpus is not provided, starting from an empty corpus 00:12:20.667 #2 INITED exec/s: 0 rss: 61Mb 00:12:20.667 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:20.667 This may also happen if the target rejected all inputs we tried so far 00:12:20.667 [2024-04-24 10:07:33.830108] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:20.667 [2024-04-24 10:07:33.830154] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:21.184 NEW_FUNC[1/628]: 0x482ba0 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:12:21.184 NEW_FUNC[2/628]: 0x485170 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:21.184 #10 NEW cov: 10712 ft: 10413 corp: 2/56b lim: 90 exec/s: 0 rss: 68Mb L: 55/55 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:12:21.184 [2024-04-24 10:07:34.302495] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:21.184 [2024-04-24 10:07:34.302546] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:21.184 #11 NEW cov: 10726 ft: 14141 corp: 3/111b lim: 90 exec/s: 0 rss: 70Mb L: 55/55 MS: 1 ShuffleBytes- 00:12:21.442 [2024-04-24 10:07:34.490321] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:21.442 [2024-04-24 10:07:34.490359] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:21.442 NEW_FUNC[1/1]: 0x19276a0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:12:21.442 #12 NEW cov: 10743 ft: 14582 corp: 4/166b lim: 90 exec/s: 0 rss: 70Mb L: 55/55 MS: 1 ChangeBit- 00:12:21.442 [2024-04-24 10:07:34.684599] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:21.442 [2024-04-24 10:07:34.684635] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:21.700 #13 NEW cov: 10743 ft: 14703 corp: 5/222b lim: 90 exec/s: 13 rss: 70Mb L: 56/56 MS: 1 InsertByte- 00:12:21.700 [2024-04-24 10:07:34.877463] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:21.701 [2024-04-24 10:07:34.877499] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:21.959 #14 NEW cov: 10743 ft: 15709 corp: 6/277b lim: 90 exec/s: 14 rss: 70Mb L: 55/56 MS: 1 ShuffleBytes- 00:12:21.959 [2024-04-24 10:07:35.071768] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:21.959 [2024-04-24 10:07:35.071801] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:21.959 #15 NEW cov: 10743 ft: 16077 corp: 7/333b lim: 90 exec/s: 15 rss: 70Mb L: 56/56 MS: 1 ChangeBit- 00:12:22.218 [2024-04-24 10:07:35.264767] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:22.218 [2024-04-24 10:07:35.264799] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 
return failure 00:12:22.218 #16 NEW cov: 10743 ft: 16452 corp: 8/372b lim: 90 exec/s: 16 rss: 70Mb L: 39/56 MS: 1 EraseBytes- 00:12:22.218 [2024-04-24 10:07:35.451192] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:22.218 [2024-04-24 10:07:35.451224] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:22.477 #17 NEW cov: 10750 ft: 16520 corp: 9/452b lim: 90 exec/s: 17 rss: 71Mb L: 80/80 MS: 1 CopyPart- 00:12:22.477 [2024-04-24 10:07:35.637259] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:22.477 [2024-04-24 10:07:35.637290] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:22.736 #18 NEW cov: 10750 ft: 16679 corp: 10/497b lim: 90 exec/s: 9 rss: 71Mb L: 45/80 MS: 1 InsertRepeatedBytes- 00:12:22.736 #18 DONE cov: 10750 ft: 16679 corp: 10/497b lim: 90 exec/s: 9 rss: 71Mb 00:12:22.736 Done 18 runs in 2 second(s) 00:12:22.996 10:07:36 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-6 00:12:22.996 10:07:36 -- ../common.sh@72 -- # (( i++ )) 00:12:22.996 10:07:36 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:22.996 10:07:36 -- vfio/run.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:12:22.996 00:12:22.996 real 0m19.847s 00:12:22.996 user 0m27.413s 00:12:22.996 sys 0m1.944s 00:12:22.996 10:07:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.996 10:07:36 -- common/autotest_common.sh@10 -- # set +x 00:12:22.996 ************************************ 00:12:22.996 END TEST vfio_fuzz 00:12:22.996 ************************************ 00:12:22.996 10:07:36 -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:12:22.996 00:12:22.996 real 1m27.748s 00:12:22.996 user 2m8.038s 00:12:22.996 sys 0m12.360s 00:12:22.996 10:07:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.996 10:07:36 -- common/autotest_common.sh@10 -- # set +x 00:12:22.996 ************************************ 00:12:22.996 END TEST llvm_fuzz 00:12:22.996 ************************************ 00:12:22.996 10:07:36 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:12:22.996 10:07:36 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:12:22.996 10:07:36 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:12:22.996 10:07:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:22.996 10:07:36 -- common/autotest_common.sh@10 -- # set +x 00:12:22.996 10:07:36 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:12:22.996 10:07:36 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:12:22.996 10:07:36 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:12:22.996 10:07:36 -- common/autotest_common.sh@10 -- # set +x 00:12:27.198 INFO: APP EXITING 00:12:27.198 INFO: killing all VMs 00:12:27.198 INFO: killing vhost app 00:12:27.198 INFO: EXIT DONE 00:12:30.489 Waiting for block devices as requested 00:12:30.489 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:12:30.489 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:12:30.489 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:12:30.489 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:12:30.489 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:12:30.489 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:12:30.489 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:12:30.747 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:12:30.747 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:12:30.747 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:12:31.007 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:12:31.007 
0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:12:31.007 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:12:31.266 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:12:31.266 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:12:31.266 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:12:31.524 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:12:36.844 Cleaning 00:12:36.844 Removing: /dev/shm/spdk_tgt_trace.pid1149141 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1146801 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1147939 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1149141 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1149728 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1149960 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1150284 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1150606 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1150838 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1151037 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1151241 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1151461 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1152182 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1154609 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1154823 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1155029 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1155209 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1155610 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1155785 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1156188 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1156370 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1156580 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1156758 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1156968 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1156997 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1157432 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1157630 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1157826 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1158055 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1158273 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1158300 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1158406 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1158627 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1158855 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1159035 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1159280 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1159461 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1159658 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1159842 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1160040 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1160228 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1160422 00:12:36.844 Removing: /var/run/dpdk/spdk_pid1160604 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1160804 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1160984 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1161177 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1161366 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1161563 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1161743 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1161946 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1162126 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1162319 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1162505 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1162701 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1162885 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1163105 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1163287 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1163520 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1163709 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1163935 
00:12:37.103 Removing: /var/run/dpdk/spdk_pid1164132 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1164359 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1164568 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1164772 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1164954 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1165158 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1165342 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1165540 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1165724 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1166019 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1166229 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1166526 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1166821 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1167305 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1167753 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1168139 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1168498 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1168873 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1169255 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1169680 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1170039 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1170403 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1170772 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1171146 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1171507 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1171887 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1172254 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1172629 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1172988 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1173325 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1173722 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1174092 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1174459 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1174823 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1175190 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1175560 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1175926 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1176294 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1176666 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1177098 00:12:37.103 Removing: /var/run/dpdk/spdk_pid1177463 00:12:37.362 Removing: /var/run/dpdk/spdk_pid1177833 00:12:37.362 Removing: /var/run/dpdk/spdk_pid1178204 00:12:37.362 Removing: /var/run/dpdk/spdk_pid1178575 00:12:37.362 Removing: /var/run/dpdk/spdk_pid1178954 00:12:37.362 Removing: /var/run/dpdk/spdk_pid1179421 00:12:37.362 Clean 00:12:37.362 killing process with pid 1100075 00:12:39.266 killing process with pid 1100072 00:12:39.266 killing process with pid 1100074 00:12:39.266 killing process with pid 1100073 00:12:39.266 10:07:52 -- common/autotest_common.sh@1436 -- # return 0 00:12:39.266 10:07:52 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:12:39.266 10:07:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:39.266 10:07:52 -- common/autotest_common.sh@10 -- # set +x 00:12:39.525 10:07:52 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:12:39.525 10:07:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:39.525 10:07:52 -- common/autotest_common.sh@10 -- # set +x 00:12:39.525 10:07:52 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:12:39.525 10:07:52 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:12:39.525 10:07:52 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:12:39.525 
10:07:52 -- spdk/autotest.sh@394 -- # hash lcov 00:12:39.525 10:07:52 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:12:39.525 10:07:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:12:39.525 10:07:52 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:12:39.525 10:07:52 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.525 10:07:52 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.525 10:07:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.525 10:07:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.525 10:07:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.525 10:07:52 -- paths/export.sh@5 -- $ export PATH 00:12:39.525 10:07:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.525 10:07:52 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:12:39.525 10:07:52 -- common/autobuild_common.sh@435 -- $ date +%s 00:12:39.525 10:07:52 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713946072.XXXXXX 00:12:39.525 10:07:52 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713946072.bs64wX 00:12:39.525 10:07:52 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:12:39.525 10:07:52 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:12:39.525 10:07:52 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:12:39.525 10:07:52 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:12:39.525 10:07:52 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:12:39.525 10:07:52 -- common/autobuild_common.sh@451 -- $ 
get_config_params 00:12:39.525 10:07:52 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:12:39.525 10:07:52 -- common/autotest_common.sh@10 -- $ set +x 00:12:39.525 10:07:52 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:12:39.525 10:07:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:12:39.525 10:07:52 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:39.525 10:07:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:12:39.525 10:07:52 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:12:39.525 10:07:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:12:39.525 10:07:52 -- spdk/autopackage.sh@19 -- $ timing_finish 00:12:39.525 10:07:52 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:12:39.525 10:07:52 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:12:39.525 10:07:52 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:12:39.525 10:07:52 -- spdk/autopackage.sh@20 -- $ exit 0 00:12:39.525 + [[ -n 1056185 ]] 00:12:39.525 + sudo kill 1056185 00:12:39.535 [Pipeline] } 00:12:39.553 [Pipeline] // stage 00:12:39.559 [Pipeline] } 00:12:39.577 [Pipeline] // timeout 00:12:39.581 [Pipeline] } 00:12:39.597 [Pipeline] // catchError 00:12:39.601 [Pipeline] } 00:12:39.617 [Pipeline] // wrap 00:12:39.621 [Pipeline] } 00:12:39.636 [Pipeline] // catchError 00:12:39.647 [Pipeline] stage 00:12:39.650 [Pipeline] { (Epilogue) 00:12:39.664 [Pipeline] catchError 00:12:39.665 [Pipeline] { 00:12:39.677 [Pipeline] echo 00:12:39.679 Cleanup processes 00:12:39.682 [Pipeline] sh 00:12:39.960 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:39.960 1100113 tee /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pm.log 00:12:39.960 1100119 tee /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pm.log 00:12:39.960 1186253 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:39.974 [Pipeline] sh 00:12:40.258 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:40.258 ++ grep -v 'sudo pgrep' 00:12:40.258 ++ awk '{print $1}' 00:12:40.258 + sudo kill -9 00:12:40.258 + true 00:12:40.270 [Pipeline] sh 00:12:40.555 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:12:41.504 [Pipeline] sh 00:12:41.787 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:12:41.787 Artifacts sizes are good 00:12:41.801 [Pipeline] archiveArtifacts 00:12:41.808 Archiving artifacts 00:12:41.890 [Pipeline] sh 00:12:42.178 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:12:42.194 [Pipeline] cleanWs 00:12:42.204 [WS-CLEANUP] Deleting project workspace... 00:12:42.205 [WS-CLEANUP] Deferred wipeout is used... 00:12:42.211 [WS-CLEANUP] done 00:12:42.213 [Pipeline] } 00:12:42.235 [Pipeline] // catchError 00:12:42.249 [Pipeline] sh 00:12:42.532 + logger -p user.info -t JENKINS-CI 00:12:42.541 [Pipeline] } 00:12:42.559 [Pipeline] // stage 00:12:42.565 [Pipeline] } 00:12:42.584 [Pipeline] // node 00:12:42.590 [Pipeline] End of Pipeline 00:12:42.648 Finished: SUCCESS
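One detail of the epilogue cleanup above that is easy to misread is the bare "+ sudo kill -9" immediately followed by "+ true". A plausible shape of that step, reconstructed from the expanded commands (an assumption; xtrace shows expansions, not the script itself), is a single guarded pipeline like the one below. In this run the command substitution came back empty, so kill(1) had no operands, returned non-zero, and the trailing true kept the stage green.

# Plausible shape of the traced cleanup step: kill any SPDK process still
# running out of the workspace.  The pgrep/grep/awk pipeline appears as the
# "++" lines in the trace; "|| true" (or an equivalent separate "true") is
# what produces the "+ true" record when nothing is left to kill.
sudo kill -9 $(sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk \
               | grep -v 'sudo pgrep' \
               | awk '{print $1}') || true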